
AI-powered superheroes, aka “the humans in the loop” with David (Gonzo) Gonzalez of Ziff.ai

David “Gonzo” Gonzalez is the CEO and Co-founder of Ziff.ai, a deep learning as a service platform. Ziff’s goal: Reduce the complexity required to profit from AI in an enterprise setting.

In this episode, Gonzo and Ledge discuss how to answer the question, “How do I know, as a business person, if my business model can be enhanced by AI?”

It comes down to prediction and forecasting, and two areas where the profits really lie: automation and augmentation. In other words, find opportunities to help the “humans in the loop” be faster and more accurate, and you’ll win.


David Ledgerwood
· 22 min

Transcript

Ledge: Gonzo, welcome. Thanks for joining us today. Why don’t you tell the listeners a little bit about yourself?

Gonzo: Great to be here, Ledge. So, my name is David Gonzalez, most folks call me Gonzo, it’s a kindergarten nickname. I am the CEO and co-founder of a company called ZIFF. You can find us online at ziff.ai.

At ZIFF, my business partner, Ben Taylor, and I have been working on creating essentially a Deep Learning as a Service platform. That has matured over the last year and a half into kind of a unique product offering. It’s an AI database.

Our overarching goal in life is to reduce the complexity required to proffer and profit from AI in an enterprise setting, so the classic dyad that has been super-productive in software – the product visionary plus the software engineer – can stay fully capable and use AI for the things they want to build. Hopefully that removes the bottleneck that has crept up over the last five years: that infamous or awesome role, depending on who you ask, of the data scientist.

So, that’s what we’re doing over here at ZIFF.

Ledge: Awesome. So the ability then becomes for any enabled developer to consume Deep Learning as a Service.

How do I even know, as a business person, that I have a business model that lends itself to being enhanced by AI? What’s the heuristic thought pattern around that? Like, I don’t even know: “That sounds awesome, I should use that.” But what do I do, and how do I start?

Gonzo: Great question. My thinking on this has matured. It’s taken probably longer than it should have. If you guys saw the video here you’d see there’s a little grey in the beard, so I’m not the spring chicken here.

Originally, I pursued advanced analytics because it allowed me to solve fairly high-scale problems with enough throughput to be useful, specifically in marketing and advertising. In those settings, a split-second decision is often the difference between profit and loss. So advanced analytics, and even before that more classical approaches, were the only way you could digest data and make a decision fast enough.

Where my thinking has matured is, I approached all of those problems from kind of an automation bias, but the vast majority of business problems actually fit into a prediction and forecasting camp. So when people think about their stacks, think about machine learning and AI, they’re mostly thinking about classic approaches to improving their forecasts and the predicted outcomes of certain business events.

In that mode of forecasting and prediction, the way I’ve come to see it is that advanced analytics today, for the most part, is just an incremental gain on the trajectory of prediction and forecasting.

We’ve had general linear models, we’ve had… Think of Excel dropping a regression line on a chart. We’ve had that kind of forecasting for over a hundred years. What machine learning and advanced analytics bring to the table is incrementally better, but it’s not a sea-change opportunity within an organization.

There is an orthogonal trajectory that does focus primarily on automation efforts and augmentation efforts, and it centers around specific data types: image, audio, video, and text. So, human-friendly data. In data science we call this unstructured data.

Structured data fits happily in Excel; it’s in your databases. I like to call it ‘business exhaust’: you do a bunch of processes, and the structured data is what you collect.

Unstructured data is not often collected, and if it is, it’s usually collected in service of something like compliance. Like, we have to make sure we record these phone calls, or these images that we took of your house while we assessed its damage, so that we can always come back to it later and say we made this decision right. But when we say, “We made this decision,” it’s literally a human in the loop. So for all unstructured problems there’s a human in the loop at some point, considering data that is in the system primarily for human consumption: image, audio, and text.

On that trajectory, I feel like we are in new territory. The advice is always to suss out the opportunities that fit into the camp of deep learning and automation, versus the camp of just standard prediction and forecasting.

If you can find opportunities to help the humans in the loop be faster and more accurate, deep learning is probably going to be a contributor to that effort.

Ledge: I had a conversation recently with the CTO of a health care startup that was using deep learning to look at brain scans and try to identify, down to the cellular level and based upon the literature, the best possible approach to treating a particular kind of malady or mental illness in the brain. That’s a fascinating use case where human augmentation is the whole point, because a doctor can’t consume tens of thousands of pages and remember and recall the exact right combination of things that was useful in a particular case.

For medical, I imagine there is sort of a body of work around every discipline or sub-discipline that would lend itself to that kind of human augmentation, where there’s too much to process in order to come up with what you conceive of as a logical path. You just can’t process it unless you’re using computing power to enhance that learning.

Does that resonate with the same type of problem set?

Gonzo: I think it does to some degree. I’m always cautious to attribute to AI the ability to do things that a human can’t do. My general take is that it can do things at a scale that humans can’t do them.

You can think of the scale as a time horizon. It’s not that a human can’t comprehend X, Y, Z; they just can’t process it at a sub-second level. They could totally comprehend it, but the human is going to spend a day, a week, a month analyzing it. So scaling out with AI is usually compressing time.

There are kind of the classic examples coming back from data mining practices. The classic beer and diapers example: Hey, we did a deep learning exercise in retail and it turns out if you put beer and diapers next to each other you sell more of both.

And it’s always kind of like, “Ha-ha, that’s cute. That’s interesting. Who would have guessed?” And I like to bring people back to reality and say, “Yeah, but if I put that into the hands of a new data scientist, the solution is going to be to put a diaper display in the beer aisle. And the five o’clock news is going to show up and talk about the travesty of civilization corroding because people are selling diapers to alcoholics, who probably shouldn’t have kids anyway.” It goes off the rails. And so you really need to bring the domain expert back in and say, “Okay, well maybe we put the diaper aisle closer to the beer aisle, but we don’t take the naive approach.”

So in that sense, are we going to be able to do things that humans can’t do? No, but we’re going to be able to do things that humans can do at a rate that humans would never be able to do them.

We’re finding that over and over again by introducing unstructured data to the equation. So the image component… With the healthcare example, if you can combine unstructured and structured data, you’re going to find a ridiculous amount of signal.

With our customers, what we’ve found over and over again is that the structured components are doing quite well. So, you take your structured data, your columns and rows coming out of the database – what we’re going to call metadata – and you do kind of a classic machine learning approach using some kind of gradient boosting regression or something like that. You’re going to get a really high accuracy on being able to predict outcomes, to assess that information, and potentially to properly categorize this patient’s observation as a disease or some malady.
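[Editor’s note: for readers who want to see roughly what that “classic machine learning on structured data” step looks like, here is a minimal sketch using scikit-learn’s gradient boosting. The file name, columns, and target are hypothetical placeholders, not anything from ZIFF’s product.]

```python
# Hedged sketch: gradient boosting over structured "metadata".
# "patient_metadata.csv" and the "has_malady" target are invented
# for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("patient_metadata.csv")  # rows and columns from the database
X = df.drop(columns=["has_malady"])       # numeric features: age, labs, vitals...
y = df["has_malady"]                      # 1 = malady observed, 0 = not

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)

# Probabilistic output: the "76% certainty" style of answer discussed below.
probs = model.predict_proba(X_test)[:, 1]
print("Held-out AUC:", roc_auc_score(y_test, probs))
```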

If you can also introduce the radiography – and not just the radiography, but the notes from the doctors and the notes from the nurses – when you start to consider all that data…

Again, I would argue that a top-notch physician could consider all that data and make a great judgment call on whether or not this person is plagued with X, Y, Z malady. The key innovation with AI is that, in less than a second, that information can be digested and a diagnosis can be coughed up.

That can steer a physician to make a calibrated intervention based on the confidence the AI has that X, Y, Z malady is at play. The human approach is essentially binary, we have cancer or we don’t, while the information coming back from the AI is going to be, “There’s a 76% certainty that the person has cancer.”

So, do you treat a 76% cancer case as aggressively as a 99% case? My guess is probably not. And so that’s how the augmentation effort comes into play as well.

I don’t know if that answers the question, but that’s my take.
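[Editor’s note: a toy illustration of that calibrated-intervention idea, mapping a model’s probability to escalating actions. The thresholds and actions are invented for the example and are not medical guidance.]

```python
# Toy illustration only: turn a model probability into an escalating
# intervention. Thresholds are made up for the example.
def triage(p_malady: float) -> str:
    if p_malady >= 0.95:
        return "treat aggressively"
    if p_malady >= 0.70:
        return "order confirmatory tests"
    if p_malady >= 0.30:
        return "schedule follow-up imaging"
    return "routine monitoring"

for p in (0.99, 0.76, 0.40, 0.10):
    print(f"{p:.2f} -> {triage(p)}")
```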

Ledge: Sure. There’s that extra dimension of meta-information from an unstructured source. For example, maybe there are a hundred medical journals and studies that have been conducted, and all of those can be ingested and compared against the results to guide the diagnosis. In that way, right?

Gonzo: Precisely.

Ledge: You would have had to have read and internalized and remembered all those things. Also, I imagine there’s a challenge with searching: the data would have had to have been tagged in some way, and the human would have to know how to type a search query into a traditional sort of LexisNexis, or something.

Does it help solve that problem?

Gonzo: That’s an interesting dimension. We like to say that something you get for free with deep learning, which you don’t necessarily get with classic machine learning on structured data, is that your unstructured data is truly comprehended. It’s comprehended at a level that is pretty easy for humans to understand. You get topics.

So you can take image data, you can take radiography data, you can take videos, throw them into a network, and we carve up the data in such a way that now all of your information can be sorted and searched. And not how you would think.

Not just, “Hey, we put this x-ray film into the AI and it tagged it with things, and then we’re searching against the tags.” No. It starts to look more like the way a human behaves, where we can actually search an x-ray with another x-ray. So we can take an x-ray and say, “I want to find all of the x-rays that are most similar to this one,” and you get results that show all of the images most similar to that one.

You can do this with x-rays. You can do this with pictures of faces. You can do this with pictures of roof damage. You can do this with pictures of car damage. Heck, you can actually do this with YouTube channels.
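[Editor’s note: the query-by-example search described here is commonly built by embedding each item with a neural network and ranking by vector similarity. Below is a hedged sketch of that pattern using a pretrained torchvision ResNet as a generic feature extractor; the model choice and file names are assumptions, not ZIFF’s actual pipeline.]

```python
# Hedged sketch of "search an x-ray with another x-ray": embed every image,
# then rank the collection by cosine similarity to a query image.
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageNet-pretrained ResNet with the classifier head removed, used as a
# generic feature extractor (a domain-specific network would do better).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def embed(path: str) -> np.ndarray:
    """Return a unit-length feature vector for one image."""
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        v = backbone(x).squeeze(0).numpy()
    return v / np.linalg.norm(v)

# Hypothetical corpus: x-rays, roof photos, car damage, whatever.
corpus = ["scan_001.png", "scan_002.png", "scan_003.png"]
index = np.stack([embed(p) for p in corpus])

# Rank the corpus by similarity to the query image.
query = embed("query_scan.png")
for path, score in sorted(zip(corpus, index @ query), key=lambda t: -t[1]):
    print(f"{score:.3f}  {path}")
```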

So, we have applications for the tribal knowledge that is typically trapped in a highly trained expert – not often as well paid as, say, a radiologist, but highly trained; maybe they’re a campaign manager, maybe they’re a product marketing guru. These folks have a high degree of tribal knowledge that can only be conveyed through something like an apprenticeship.

When you use AI to comprehend your unstructured data, suddenly that tribal knowledge is disseminated to the entire organization. Instead of asking, “Has anyone ever seen a case like this?” and hearing, “Well, you should go talk to Dr. Joe. That guy has been around forever and he knows everything,” you can literally just query the data. Say, “Hey, I want to see all the cases that are similar to this,” and you get a stack-ranked list of all the cases that are most similar, on every conceivable dimension, not just what you thought to tag the case with, if that makes sense.

Ledge: Yeah. That’s fantastic. You can completely see how advanced searchability based on “Find me a thing like this,” without having to specify what ‘this’ is, is going to be vastly more powerful for digging into what is just a gigantic data set.

Gonzo: It starts to touch upon kind of the long-held belief that we were going to get to semantic search.

We have something like it with text-based searching, but typically, when we think of search, computers are really good at what we call syntactic search – things that are very similar to each other on the surface. Searching for ‘hat’ and ‘hats’, plural. That’s easy. Searching for the explicit tag of a cancer cell of a certain kind, that makes a lot of sense to us.

What I think we all hope for is that delightful search result, the one that finds us the intent of what we were looking for even when it was not something we could articulate clearly. I would call that true semantic search.

Something like that can happen with image, audio, and video-based data. When I say I want to search and find something like this image of a person, I don’t want to have to type in ‘male, Caucasian, age bracket, whether or not they have stubble.’ I don’t want to type in all of those things. I’d just like to put a picture of the person I want to find into the system and get back everything that’s most similar.

I think that that kind of power is going to rapidly permeate the enterprise. It’s going to be something that transforms the way most of us work, these processes where we’re in the loop considering a large amount of unstructured data.
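[Editor’s note: the syntactic-versus-semantic contrast is easy to make concrete with text. A minimal sketch, assuming the sentence-transformers library and one of its published models: a literal keyword match misses documents that an embedding ranking surfaces by meaning. The documents and query are invented.]

```python
# Hedged sketch: syntactic (keyword) search vs. semantic (embedding) search.
from sentence_transformers import SentenceTransformer, util

docs = [
    "Roof shingles torn off by hail on the north slope.",
    "Minor water staining on the ceiling near the chimney.",
    "Vehicle rear bumper dented in a parking-lot collision.",
]
query = "storm damage to the house"

# Syntactic search: a literal match on the query's key term finds nothing.
print([d for d in docs if "storm" in d.lower()])

# Semantic search: rank by meaning, even without shared words.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)
q_vec = model.encode(query, normalize_embeddings=True)
for d, s in sorted(zip(docs, util.cos_sim(q_vec, doc_vecs)[0].tolist()),
                   key=lambda t: -t[1]):
    print(f"{s:.3f}  {d}")
```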

Ledge: Great thoughts. Great thoughts. So I usually wrap with the same question. You know we’re in the business of providing super-senior remote freelance engineers, and I love to ask every tech leader I talk to: what is your heuristic for identifying a top-of-the-top engineer? And then, how and where does remote work fit in there?

Gonzo: So I’ll start with the second one, where remote fits in. I’ve had tremendous success with well-identified projects. I think we all think that our projects are well identified, or maybe we hope that they are. It kind of boils down to this question of ownership.

I’ve had tremendous success with talented remote operators when I say, “This is the objective, these are the assessment criteria, and I actually don’t care how you get there.” The classic Delta Force type mentality where it’s like, there is a singular objective and I don’t care how you get there.

To the point of, I almost don’t care what language you’re writing in, I don’t care what framework you use, I just need X, Y, Z done.

When you do that the most talented people often bring back into the organization a completed project that blesses the whole organization with like, “This is a center of excellence. We should all build better stuff like this person did.” And I find that that has paid dividends over and over again on that front.

With kind of the staff augmentation approach, it’s a different mentality. You’re really looking for somebody who can work well with a project manager. I think in that kind of remote-oriented team, it does come back down to well-identified projects, but at that level it’s a task. A very, very well-defined task.

When you’re working remotely, unfortunately, communication is just not as sound. I would always hope that it would be better because it’s more explicit, but what I find is that it’s more explicitly vague, because people can’t read each other. So you have to tighten up the communication style, but really you just have to tighten up your ask. When you do that, remote works really, really well.

On the first part of the question, how do you find or identify great technical talent, I’ve been back and forth on everything from code tests to just open-ended interviews.

Where I’ve gotten to is more – I’m kind of spacing on the terminology, but – I love to have an interview where I have the person give me a detailed play-by-play of the projects they’ve worked on and where they’ve worked.

I find that, over and over again, the way somebody articulates their work – how they contributed to it, what the pitfalls were, how they failed or triumphed, or how they failed and then rescued a project – is much more indicative of their chops than most whiteboard coding sessions. I’ve done a lot of those too, and I feel like the way somebody attacks code is less important than a long history of successful projects under their belt. The assumption is they know how to code.

And what I find is, if you’ve got somebody with five-to-seven-plus years under their belt, they know how to code. But if they haven’t learned how to be a good programmer, a good architect, a good communicator, then it almost doesn’t matter how they code.

Ledge: Fantastic answer. Couldn’t have said it better myself.

Thanks so much, Gonzo. It’s good spending time with you. Best of luck at ZIFF. We’re looking forward to seeing what you guys come up with.

Gonzo: Absolutely, Ledge. Thanks for the questions, spending the time and I look forward to talking with you further.