Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
A huge use case for GPT-3 is in managing large, complex inventory and catalog problems. Anybody who's ever used apps like DoorDash or Uber Eats has probably had an experience like this, which is I really want that particular menu item with specific customizations, but the app doesn’t allow it. Let's say I want steak strips with my particular meal instead of ground beef. In the restaurant, all I have to do is ask for the steak, but when I try to order on the app, that option is missing, the app lists them as the same option, or maybe the price is wrong. There are all sorts of problems with maintaining catalog accuracy. Proper classification of menu items and generally inventory overall has turned out to be a major focus, because it’s not a problem that's unique to on-demand delivery catalogs. It's a problem that appears everywhere with any digital system, and it's another reflection of this broad question: how do we create the right balance of fully automated systems and systems with human review?
One way I explain GPT-3 to people hearing about it for the first time is: what if you took the entire contents of the internet, every single word that has ever been seen before, and then you trained a computer to make sense of all those words and sentences and paragraphs in context? By creating and training that computer, you’ve created what we call a large language model. Then what you'd like to do is input some text as a prompt to that large language model so it can predict what's likely to come next. An even easier example: imagine that you've read every story by Edgar Allan Poe, and then you see the first paragraph of a new story. You can probably imagine where it might go. What GPT-3 aspires to do is to be effectively predictive of what might come next, given all the knowledge that it's consumed.
GPT-3 is effective for solving large, complex cataloging and inventory management problems, which is especially important in the restaurant and healthcare industries.
The most effective use cases for generative AI tools include pairing its analysis capabilities with the expertise and nuance of human review and feedback.
Richie Cotton: Welcome to DataFramed. This is Richie. Breakthroughs in generating images and text have been the big story for artificial intelligence in the last year. GPT-3 and its derivatives like ChatGPT, as well as DALL-E and Stable Diffusion, have already had a huge impact in just the few months since their launch.
Today we're going to talk about how businesses and data professionals can make use of these AI technologies as well as how AI and humans can work together. Joining me is Scott Downes, the CTO at Invisible Technologies. He's led engineering, product design and marketing teams at multiple growth stage startups, and he's got a really deep technological knowledge but also a great sense of how technology applies to businesses and the wider world. Let's hear what he has to say.
Hi Scott, thank you for joining us today. So to begin, can you give us a little bit of context? Tell us about what you do at Invisible.
Scott Downes: Sure. Invisible Technologies is the full name of the company, and I mention that because we're a technology company, but we firmly believe that technology is best when it's invisible. So what does that mean? It means that when we think about what successful execution of a process is for a client, what we want to do is focus on the outcome and results more than on the particular tools or tech that we used.
So what do we actually do? Our business is focused on mapping processes for our clients and executing them at large scale. The examples of the types of work that we do are pretty broad, because we really are stubbornly horizontal in the way that we've built our platform. Our belief is that any significant business problem that you're looking at can probably be better handled if you have a clear map of how it should be executed, and that every process of significant scale is gonna involve some element of human labor and some element of technology automation, even AI and ML techniques.
The way that I often think of it is: if you were a scientist on an Arctic expedition and you took a core sample, you would see all these interesting things in the core sample that you took. If you take a core sample of any highly functioning organization and you pull out a process, like say lead generation for a sales team with data enrichment, what you'll find is a combination of integrations with third-party platforms like Salesforce; smart, intelligent, high-judgment individuals making decisions about what tools and tech we should be using, serving as approvers and people who are looking to guarantee quality; and also a full set of third-party tools that might come into play, or custom automations that enable success.
So a normal lead generation process might involve integration with a Salesforce platform, third-party data enrichment through a tool like ZoomInfo, and custom personal review. When you put all those pieces together, the problem space that we think about at Invisible is orchestration: what's the right balance of people and tech?
That's what we do.
Richie Cotton: Well, brilliant. I find this interaction between technology, people, and processes really fascinating stuff, and I'd love to get into that in more depth in this episode. Before we get to that, can you tell us a little bit about what you do as Chief Technology Officer?
Scott Downes: I've had a number of different roles in my career, and I think one of the reasons why I love being a CTO for a scaling startup is that it lets me explore all these different areas of my own personality and my own interests that I've had over time. Once upon a time, I was an English major in college. I've tried to make a living as a musician. I've worked as a designer. But one of the things that was constant for me from an early age was writing code. I loved programming from elementary school, though as I got older I didn't have the same enthusiasm about software engineering as a career prospect that some folks have these days. Growing up in the eighties and nineties, we didn't see programming as cool until the dot-com era hit, and all of a sudden there were a lot of folks like me with diverse interests and skills who saw technology like software development as a way to scratch all those itches, to pursue all those different passions in a centralized way.
So if you were a programmer who could write and communicate, but who also had an interest in design and a passion for how that should work, who understood business and wanted to solve interesting problems, all of a sudden you became a really valuable person. And I've just been on that path ever since.
Some of the reasons why I love my job are that in a given day I might touch, you know, eight different areas of focus. I might be working with the design team on a design review. I might be working with the data engineering team, or talking to the React developers about a front-end application that we're building.
I might be talking with the product team about product strategy, or with the executive team about corporate strategy and our business model. So I'm just kind of addicted to that diversity of interest. That's why I do it. But I guess to answer your actual question more practically: I'm responsible for engineering, product, and design. I'm also running marketing for the moment; we're gonna pass that off, but I've maintained some stewardship of marketing in multiple companies over the years. I'm really passionate about building great software products.
Richie Cotton: That's cool. Certainly having that balance of doing technical things and doing creative things where you're interacting with humans is something that appeals to me. Alright, so let's go back to what you were talking about before, about Invisible Technologies doing this mix of things with technology and with processes.
And the technology side, of course, is built on GPT-3. I know it's been very hyped recently, but for those people who haven't heard about GPT-3, can you just give a little overview of what it involves?
Scott Downes: Sure. Well, first of all, I'll say we're fans of all cool technology. We're heavy users of all sorts of platforms, including RPA tools. I don't wanna start listing names of companies or tools that we work with, because the list is very long and I don't wanna forget anybody. But when we think about how to solve problems, we think about what the right technology is for the job. Not everything's running on GPT-3, but OpenAI is doing some amazing work, and we are very enthusiastic advocates for GPT-3 as a practical tool in your toolbox. So with GPT-3, I feel like even people outside the technology world are starting to feel the ripple effect or the impact of large language models, and GPT in particular.
So I think that a lot of folks will already know what I'm talking about, but for those folks who haven't heard, GPT-3 is what we call a large language model. Machine learning models are trained on datasets in order to solve specific problems, and the problem set for GPT, and the data that it's fed, is incredibly broad.
So one of the ways that I like to explain it to folks who are hearing about it for the first time is: what if you took the entire contents of the Library of Congress, or all of the web, every single word that has ever been seen before, and then you trained a computer to kind of make sense out of all those words and sentences and paragraphs in context? Assume that you've created this; we call that a model, a large language model. Then what you'd like to do is give some text as input, or a prompt, to that large language model and see what's likely to come next. I think that most folks can have some intuition for the idea that if you've read every book by Edgar Allan Poe, every story by Edgar Allan Poe, and you see the first paragraph of a new story, you can imagine where it might go. Basically, what GPT-3, and any large language model, is aspiring to do is to be effectively predictive of what might come next, given all the knowledge that it's consumed. Does that make sense?
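Scott's next-word intuition can be made concrete with a toy example. The bigram model below is a deliberately tiny, hypothetical stand-in for a large language model: it only counts which word follows which in a training text and predicts the most common successor, whereas a real LLM like GPT-3 conditions on the entire preceding context with billions of parameters.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    successors = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        successors[current][nxt] += 1
    return successors

def predict_next(model, word):
    """Predict the most common successor of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None  # a real LLM degrades more gracefully on unseen input
    return followers.most_common(1)[0][0]

# Tiny illustrative "corpus" (a Poe fragment plus a made-up sentence).
corpus = (
    "once upon a midnight dreary while i pondered weak and weary "
    "once upon a time the raven spoke"
)
model = train_bigram_model(corpus)
print(predict_next(model, "once"))  # "upon" ("once upon" appears twice)
print(predict_next(model, "upon"))  # "a"
```

The same idea, scaled up from single-word counts to whole-context prediction over internet-scale text, is what makes "see the first paragraph and imagine where it goes" work.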
Richie Cotton: It does make sense, yeah. So it's about predicting a bit of text given a text input. Maybe you can talk a little bit about how Invisible Technologies is using GPT-3 in this context, working with text.
Scott Downes: Sure. I'll take a step back and say a little bit about some of the business problems that we deal with. Like I said in my initial answer, we're stubbornly horizontal. What does that really mean? It means that we try not to overly focus on any particular use case or on any particular vertical. So our process platform can be used by marketing departments, or operations departments, or finance departments, or hiring departments.
So we're not only targeting, say, corporate marketing, and we also don't only target one specific industry; we don't only target the energy industry, as an example. So when I say that we're stubbornly horizontal, that's what I mean. But as it turns out, the period of really explosive growth for us happened during the early stages of Covid, when we started working with on-demand delivery companies, and the problems that they were having at that particular moment were just particularly well suited for our philosophy of how to solve problems with processes.
Specifically, we worked on large-scale menu transcription, because all of a sudden everyone in the world still needs to eat, but going to a restaurant is not as viable as it was the prior Thursday, and all of a sudden you've got hundreds of thousands of restaurant menus that need to be transformed and put into the catalog of options for various delivery companies.
If you think about some of the prevailing approaches for how those companies were solving those problems: well, of course, they had systems for every restaurant to go and upload information themselves, which was very unreliable. It turns out restaurants are not data entry companies; it's not a natural skillset necessarily to import, upload, and export that kind of data into proprietary systems.
And some of those companies looked to solve that problem through engagement with BPOs, or outsourcing companies. What they would do is say: here's a bunch of raw input, now let's have a bunch of people look at these and figure out how to turn that into a menu for your local pizza restaurant.
Menus are notoriously complex, as it turns out, with so many options. You're not just getting a burger with or without pickles; you might have thousands of different possibilities for what custom pie you want from a pizza shop. So you combine that sort of problem space with an immediate need to massively scale up what were largely human operations.
There was some understanding that tech might help: maybe we can use OCR tools to transcribe the contents of menus; maybe we can use scrapers to read websites and extract data and transform it in a way that can be uploaded to those systems. What we found, and again it's in alignment with our philosophy, is that a combination of off-the-shelf tools and custom-built tools, which handle the sorts of problems well solved by machines, should be orchestrated in concert with large-scale human efforts. Because as it turns out, sometimes you need a high-judgment individual to discern that you don't typically serve pork rare, right? You might not have temperature options for every menu item.
That was a transformative moment for our company to be engaged in. Everybody was elbow-deep in this problem space; I was working on menus till midnight every night. Myself and everyone in the company really internalized the values that we were trying to present with our platform: this idea that machines aren't good enough by themselves, and people aren't good enough by themselves, but there's some form of synthetic intelligence that happens through the right balance of humans and automation.
This is all a big setup for how we use GPT-3. At that point, we found that we have a bit of a specialty around managing large, complex inventory problems and catalog problems, and I think everybody who's ever used DoorDash or Uber Eats or any of those applications has probably had the experience that I've had, which is: I really want that particular menu item with the following options.
Let's say I want steak strips with my particular Mexican dish instead of ground beef. And I know that every time I've gone to that restaurant, all I have to do is say, hey, can you give me the steak, not the ground beef? And of course they're happy to do it. But if you go and you look in the app, for some reason it's missing.
Or it's confusing, or maybe it's in two pieces. Oh wait, but it says ground beef and steak strips on the same option; how can that even be? Or even the standard option that you typically purchase is priced wrong, or it's spelled wrong and you're not even sure what's going on. Those sorts of problems with managing the accuracy and proper classification of menu items, and inventory generally, have turned out to be rampant. It turns out that's not a problem that's unique to on-demand delivery catalogs. It's a problem that appears everywhere with any digital system, and it's another reflection of this general question of what we do to create the right balance of fully automated systems and systems with human review.
Richie Cotton: It does seem like, at first glance, it's a really simple problem to try and transcribe menu items. You think, okay, well I'll just put my data in a format, like some kind of API, and it's sort of fine. But then actually think about the experience of going to a restaurant website: for some reason the menu's always kind of buried somewhere within that site in a PDF, and it's in a font you can't read, and you're like, well, how do I buy anything? So yeah, scaling that across millions of different restaurants seems like really quite a difficult problem.
Scott Downes: I mean, it's almost the exception when you have high quality in that space. So how does that lead to where we are today? Well, one of the things that we've found in recent work with GPT-3 is that a lot of people think about GPT-3 in terms of generative text problems, and we use it that way too. I'll just say transparently, candidly, that we actually use GPT-3 to make blurbs and pieces of marketing material for our newsletters, and we're excited by the novelty and extreme relevance when you ask questions or set up the right prompts to get an interesting blurb from GPT-3. It often leads to real, deep questions. We might use ChatGPT, which is a recent innovation, to say: write a blurb for our company that reflects the following values, or write a blog post. That's when the interesting part happens, right? The engagement of humans with generative text. It's almost like how some of us walk into a meeting without a plan or an agenda, and those meetings are inevitably ineffective, or you spend a lot of time trying to establish ground rules and the meeting becomes a meeting about a meeting. Amazon, I think, famously has meeting protocols that require people to spend the early part of a meeting reading, and then react to what was in the initial narrative that was shared. I think that the existence of generative text for marketing use cases leads to higher quality conversations, because you already have a framework to start from. That's not the example I wanted to give; that just happens to be one example. Gosh, Jasper's doing amazing things in that space. It's a great company.
Richie Cotton: Yeah, certainly I can identify with the idea that having a machine write marketing materials is great, because it's one of those things where it doesn't necessarily have to be innovative text. A lot of the time it's just about saying the right thing in the right tone of voice.
Scott Downes: One of the things that we found pretty cool, pretty exciting, is that there are specific problem spaces within that framework of managing complex data that requires human review, goes through transformations, and integrates into third-party systems. In that general problem space, there are several things that tend to come up, like classification problems.
So it could be, broadly, that you have options associated with steak or with a pizza that don't apply to fish; or, specifically, that you may have items in your catalog that have been misclassified. You could imagine a situation where I'm looking to buy a laptop battery replacement, and I go and look at batteries and I see AA batteries and D-cell batteries and all kinds of batteries that don't seem to relate to computer batteries.
Then I go and look in a different part of the hierarchy of the taxonomy and I go, oh, I had to start at computers before I could get to the batteries that are relevant to me; batteries at the top level weren't the batteries I was looking for. And that's just a navigation problem. Imagine what happens when things are wrongly classified. So even if I knew to go to the computer space, I might find AA batteries in the computer battery space, and I might find my laptop replacement battery somewhere entirely wrong. Broadly, I would describe those as classification problems for inventory management or catalog management.
So we were doing some recent testing for one of those use cases where we had a classification problem, and we started with the assumption that we might have to work with a specially trained model to address this particular classification issue. But we actually found that not even a fine-tuned model, and I'm sure we can talk about that a bit more if we need to, but just plain old stock GPT-3 was able to help us meaningfully to solve classification problems.
And there was a specific example we looked at where GPT-3 beat our human testers at identifying a particular item. I believe it was women's makeup, distinguishing between eyeliner and mascara, which I don't think I know the difference between. Now I'm embarrassed; I need to go look it up.
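As a rough sketch of what classification with "plain old stock GPT-3" can look like, the snippet below builds a constrained zero-shot prompt. The category list, product title, and model name here are hypothetical illustrations, not Invisible's actual data or pipeline, and the API call itself (which needs an OpenAI key and network access) is shown only as a comment.

```python
# Hypothetical category taxonomy for the makeup example from the episode.
CATEGORIES = ["eyeliner", "mascara", "lipstick", "foundation"]

def build_classification_prompt(item_title):
    """Turn a catalog item title into a constrained classification prompt,
    asking the model to answer with exactly one known category."""
    return (
        "Classify the product into exactly one of these categories: "
        + ", ".join(CATEGORIES)
        + f".\n\nProduct: {item_title}\nCategory:"
    )

prompt = build_classification_prompt("Waterproof Lengthening Lash Wand, Black")
print(prompt)

# Sending it to a completions-style GPT-3 endpoint would look roughly like
# this (commented out because it requires credentials):
#
# import openai
# response = openai.Completion.create(
#     model="text-davinci-003",  # a stock model, no fine-tuning
#     prompt=prompt,
#     max_tokens=5,
#     temperature=0,
# )
# label = response["choices"][0]["text"].strip().lower()
```

Constraining the answer to a fixed label set is what turns a general text predictor into a usable classifier; low-confidence or out-of-set answers would be the ones routed to human review.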
Richie Cotton: A subject for a different podcast, perhaps: eyeliner versus mascara.
Scott Downes: Maybe my makeup expertise is poor. But yeah, that was an example of a situation where, when you think about the problem space that I've posed of how to accurately classify items in a taxonomy, whether that be for an accurate catalog or for an accurate diagnostic assessment by a radiologist, classification problems are a significant challenge in tech. And we are finding that OpenAI's amazing work in this space has made the cost and barrier to entry for solving some of those problems much lower. That's just one example; I don't wanna brag too much on ourselves. I do want to say that I think Notion is an example of a company that has released some AI features in their platform that address some of the same issues that we see in that kind of inventory space as well.
You might want to clean up misspellings, you might want to clean up grammar, and we've lived in a world for a long time where you think you might need dedicated tools for that. I think Grammarly still exists, right? There are companies that exist to target spellcheck and grammar, but increasingly those sorts of tools become table stakes in mature platforms because of the capabilities provided by OpenAI's API for GPT-3.
So if it's super simple for me to say, "Here's an input, can you clean up the grammar?" and it does, and it's effective, then every product gets better. For those who haven't seen what Notion is doing in that space, they have some really nice little generative text features, a little grammar and spellcheck feature.
Those kinds of immediately, super practical things run at odds with what you hear sometimes in the media, which is more obsessed with robots taking over the universe, or Skynet, or something. The reality is that the AI tech coming out every week these days is really impressive at solving simple, practical problems.
And if you can solve small, simple, practical problems in a scalable process, that sound that you hear is a cash register, right? You're saving money on every single run. And if you're trying to process 10,000 menus or update a hundred thousand inventory items, then the ability to take some of the load off matters, whether it's 20% or 80%.
These are totally real-world, viable scenarios for us, where we find that problems that used to require humans can now be solved to a reasonable level of accuracy, not a hundred percent but a reasonable level of accuracy, with commonly available, off-the-shelf models that don't require a lot of customization, combined with human-in-the-loop review.
Well, that's like a dream come true for a company like us, because it just validates the thesis that technology is best when it's invisible. Humans are a key part of every meaningful bit of work, and it lets us put together AI that's going into spaces we never could have imagined before with well-trained, highly capable human agents.
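The "off-the-shelf model plus human-in-the-loop review" pattern Scott describes can be sketched as a simple confidence-threshold router: predictions the model is sure about flow straight through, and the rest are queued for a human agent. The threshold value and the data shapes below are illustrative assumptions, not Invisible's actual pipeline.

```python
# Predictions at or above this confidence are auto-accepted; the rest go to
# a human review queue. The value is an illustrative assumption.
CONFIDENCE_THRESHOLD = 0.90

def route_predictions(predictions):
    """Split (item, label, confidence) tuples into auto-accepted results
    and a queue of items needing human review."""
    auto_accepted, human_queue = [], []
    for item, label, confidence in predictions:
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_accepted.append((item, label))
        else:
            human_queue.append((item, label, confidence))
    return auto_accepted, human_queue

# Hypothetical model outputs for the battery-catalog example.
preds = [
    ("AA battery 4-pack", "household batteries", 0.97),
    ("Li-ion pack for XPS 13", "laptop batteries", 0.62),
]
accepted, queued = route_predictions(preds)
print(len(accepted), len(queued))  # 1 1
```

Even taking only the high-confidence slice off a human team's plate is where the "cash register" savings Scott mentions come from, while the ambiguous cases still get a high-judgment reviewer.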
Richie Cotton: Okay, so there's a lot to unpack there, but it seems like the central message is just that if you can make these writing tasks a little bit easier, then it's gonna help you scale. So for other businesses who are interested in making use of GPT-3 or other text generation tools, where do you think is a good place to get started?
Scott Downes: Well, I think the ChatGPT playground; apparently if you look on Twitter, everybody's gone crazy over it in the last few weeks. I even think about how, between when you and I first spoke, the time when we're recording this podcast, and the time that this is published, there's so much innovation in this space that the story will change materially in each of those windows.
If this is published in a month, I fear that people will be going, oh, I already know about ChatGPT, of course. But Twitter is blowing up right now with examples of people seeing these kinds of transformative interactions they can have. Effectively, it feels like a conversation where you maintain context: instead of it being a single prompt with a single response, you send a message, get a message in response, and then you send another message. So I would just encourage people: Google "ChatGPT" and go have some fun. And I think about an example of how GPT is changing the day-to-day lives of software engineers, including engineers on my team.
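Mechanically, "maintaining context" usually means the client resends the full message history with every turn, so each reply can build on everything said so far. Here's a minimal sketch, with a fake model standing in for the real chat API (which would be a network call):

```python
def echo_model(messages):
    """Pretend chat model: replies by reporting how much history it saw.
    A real model would generate text conditioned on all of `messages`."""
    return f"(reply that has seen {len(messages)} messages of context)"

class Conversation:
    """Keeps the full history and resends it to the model on every turn."""

    def __init__(self, model):
        self.model = model
        self.messages = []

    def send(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        reply = self.model(self.messages)  # whole history, not just the last turn
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = Conversation(echo_model)
print(chat.send("Give me 10 ideas."))          # model sees 1 message of history
print(chat.send("Expand on the third idea."))  # model sees 3 messages of history
```

Because the second request carries the first exchange along with it, a follow-up like "expand on the third idea" can be understood at all; a single-prompt completion API has no such memory.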
If you asked me a few months ago, I would have talked about Codex and Copilot and GitHub, and how we're now at a place where engineers on my team are already using those tools to figure out how to do tasks. You know, machines are good at remembering minutiae and people aren't always, and I would rather have engineers who are thinking about architecture and business problems and how to solve those, rather than remembering the exact syntax to call a particularly idiosyncratic API that maybe doesn't look exactly the way other APIs look, or the nuance of how to invoke other frameworks or libraries. Through tools like Copilot, which is GPT-powered, you could, a few months ago, say "I need to write a method that does the following" or "I need to write a function that does the following" and have it prepopulated.
Well, the technology has moved so quickly that now you're seeing people have those sorts of conversations with ChatGPT, conversations that have context and maintain it across questions. So there's an example I'm gonna bring up, with Elon pushing so hard on the Twitter culture, executing a bunch of layoffs, and kind of being a harsh taskmaster in that space.
Some joking tweets popped up, with one person providing an example of how they would respond if they had to produce a document for Elon Musk describing what they've worked on this week. They asked a question like: ChatGPT, I work at Twitter and I would like to come up with 10 good ideas that might be worth exploring, that would show that I'm providing value as a software engineer at this company.
And it responds with 10 numbered and relevant results. Oh, cool, that's amazing. Well, let's take one of them: can you go ahead and generate a document for me that describes how I would use that and includes some statistics? Okay, now generate some sample code. And then after the sample code was generated: you know what, that sample code looks too simple, can you add some enums and some extra variables? And it does that, literally does that, right? So that kind of interaction wasn't possible, or at least it wasn't broadly available, a month ago. I know it's very easy to get attached to this narrative of fear about jobs being lost, but historically, and I don't wanna go too far out on a limb here, I'll say it seems like the Industrial Revolution did some good stuff for us.
And the ability to connect through the internet has done some good things for us. And I think that there's always a little bit of twinge of fear associated with rapid technological advancement, but there also is an excitement and they're really kind of the same feeling. It's just how you interpret it.
So if you feel your heart race a little bit when you think about these kinds of things coming into existence, well, good: you're paying attention. It is amazing and the implications are complex, but what I personally find is that having at our philosophical center as a company, and even for me as a person, the knowledge that these are tools that enable humans to do amazing things is what motivates me and excites me.
At the end of the day, we all have Iron Man suits. We're all Iron Man.
Richie Cotton: Absolutely. It just seems like it's a really transformative time, and generative AI has really come of age, maybe in the last month. In terms of using it, you've talked about a few of these use cases around automated code, and we talked earlier about marketing, things like that.
I suppose two of the things on a lot of business minds at the moment are things around productivity increases and cost savings. So are there any specific use cases that you think are gonna be important in those areas?
Scott Downes: It's hard to say. I know that the ones that I see that are particularly powerful relate to some of the examples that I already described of managing complex data sets that require human review. And that's a very broad and general way of saying something that applies to almost every company. So clearly there are a lot of e-commerce use cases, and clearly there are use cases that relate to supply chain in general.
I think that one of the things I've found exciting in the ML space, really from my first experience doing bioinformatics use cases, is that there's a lot of opportunity for us to remove inefficiencies and reduce cost in healthcare. And it's an area of enough interest for us as a company that we're very mindful of HIPAA.
And we do work with some clients that have sensitivities related to that. I think one of the more compelling use cases I saw recently was related to increased accuracy in the classification of medical scans. I spoke to a man running a company whose primary focus is on doing analysis of medical scans, where there's an AI component and a human review component, and the outcomes are fairly measurable and the classification problems are pretty complex. So let me unpack that. A lot of folks may still have a mental frame of classification problems being really simple, like hotdog or not-hotdog, the Silicon Valley example, or color identification.
Or if you've ever done one of those captchas where it's like, pick the horse that's smiling or whatever, those kinds of classification problems seem fairly simple and trivial. And it's great that we're getting to a place where you're providing training data to make those things more accurate over time by engaging with the captcha.
But I think that the possibilities associated with more accurate medical testing are lower costs and saved lives. And that's something it's useful for me to remind myself of: cost savings in certain spaces isn't just about increasing wealth, it's actually about enabling opportunity and extending lives. So when I heard this example, and walked through it in great detail with the CEO of this company: no offense to radiologists out there, but they were able to demonstrably produce more accurate results from analyzing scans through a combination of complex classification, with hundreds of potential outcomes, and then human review.
And I know we're supposed to be talking about technology here, but the humans piece is just amazing to me. If you think about the work associated with human-in-the-loop processes, and the types of people who do it, they're a different breed. They're very smart, very retrainable, very capable people whose job is to pay a lot of attention, very detail-oriented.
They're kind of like my brethren in the engineering space. They think differently. They're able to focus with a skillset that still surpasses even the greatest capabilities of AI models. They're the ultimate arbiter of whether this scan means this or that. And what I find really fascinating about that is that the psychographic profile, the way that those people operate, is more important than their existing training.
So the real heroes in this particular scanning use case are not the ML scientists or the engineers who built the model. God bless 'em, they did something really important, but the people performing that final review are intelligent, high-judgment, high-impact individuals who are saving lives.
And they're able to do it at a level of oversight and quality that's hard for a radiologist to match, because that radiologist has a lot of jobs: dealing with the insurance company, bedside manner, the practicalities of running the scans. Whereas this person sits at a machine and looks at the data very carefully.
Those people are close to my heart, and we have thousands of them in our company. We call them agents. I think sometimes people get confused when we say agent; they think we mean a machine technology. But we have thousands of agents all around the world who log into our platform, Invisible's platform, every day, and they solve complex problems. They're very retrainable, malleable, intelligent people.
They're the secret sauce. As for the technology, we just kind of take commoditized tech off the shelf and orchestrate it in the right way. You've gotta have the right people to make sure we're making the right decisions.
Richie Cotton: That's really interesting, and in healthcare it really is a sort of life-or-death decision. My intuition would've been that the radiologist or the doctor, as the most senior person, is gonna be the real expert in interpreting these medical images or test results. But actually, having someone who's dedicated to that one task, even if they haven't got years and years of university experience and medical training, just doing that one task is gonna make them more effective at it.
So one thing that's cropped up a few times here is the idea that you do the machine side first. You've got AI first, and then a human reviewing what the machine's done. Is it always that way round, or do you ever have human first, then AI as the review step?
Scott Downes: It's more often in our business that it's human first, but let me walk through that a little bit. One of the key value props of our platform and our business from the start, and I think this just reflects the entrepreneurial ethos, is that we've always been very excited to tell our clients we can deliver results within the first 48 hours.
We know we have really bright people, and we have a platform that enables those people to maximize their talents. So it's a very common pattern for us, in alignment with our business model, to show early impact. Typically what happens is that a company comes to us with a problem and they say, this is how we're solving it right now.
This is the frame in which we're looking at this. That problem is probably being solved by a combination of overworked employees and maybe some third-party system they bought. What we do first is take your process: if it's well documented, we can encode it into our system very quickly, like in an afternoon.
At that point, what we've done is take your approach to solving the problem, warts and all, and put it into a system that enables it to scale. Imagine a magical shrinking ray or exploding ray: shine our ray on your process and make it scale up to run orders of magnitude larger. But obviously that's not the real path to efficiency.
What we want to do is apply our expertise. So our more typical path is to start by airlifting your process out of human beings, encoding it into our platform, and executing it with our agents, and then finding opportunities to optimize. Having a process canvas in front of you, where you can literally see every block of work being performed, the time it takes to handle each request, and the cost associated with it, enables us to make very rational decisions about what's in the best interest of our clients.
There's a key factor here. I told you I'm enthusiastic about code and enthusiastic about design, but I'm also enthusiastic about business models, and I think one of our secret weapons at Invisible is clearly that we charge based on results, based on outcomes. A company whose job is to bring in an army of people, and which is paid by person-hours, is only gonna be motivated to increase that number.
It's the contractor problem. If I have someone come out to my house and I'm paying them $20 an hour to fix my deck, they're not gonna finish early; they're gonna take every minute. It's just in their self-interest. But because we charge based on an agreed-upon rate for the output, we are always incentivized to optimize.
That incentive to optimize usually comes in the form of a second wave of automation after we've encoded your process. Another thing that's implicit there is that we aren't a set-and-forget company; we're a relationship company. The term we sometimes use is work sharing. When we take on work for a client, you shouldn't think of it as outsourcing some task to a room full of people.
What we're doing is sharing the work. We're gonna mature it over time. We're gonna enable it to scale to levels you were having trouble achieving on your own. And while we're doing that, we're gonna find optimizations that reduce the cost, cuz that's good for us. We have a principle of deflationary pricing. Some people call that volume pricing, but conceptually we're not just offering a discount on good faith or reducing our margins.
If you're invested in us, then we're invested in you, and we will find optimizations that reduce the cost for ourselves and for our clients. So how does that relate to the bigger picture of automation? First, there are certainly situations where we go to a client and understand that their pain point is a largely automated process that needs help.
We'll airlift that out too; we can take that and run a largely automated process from day one as well. But typically, the folks who come to us are having problems because they're innovative companies growing rapidly, and they've hit a wall with the approach they're taking, which usually involves a decision about what tech should be engaged.
So take your process, add the right tech.
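The pattern Scott describes, encoding a client's process human-first and then swapping in automation step by step, can be sketched in a few lines of Python. This is a minimal illustration, not Invisible's actual platform; every name and handler here is hypothetical.

```python
# Hypothetical sketch of "encode the process, then optimize": each step
# starts out routed to human agents, and individual steps get swapped to
# automation once measurement justifies it.

def human_review(item):
    # Placeholder: a real platform would queue the item for an agent.
    return {"label": "needs_agent", "source": "human"}

def automated_classifier(item):
    # Placeholder for a model call; here, a trivial keyword rule.
    label = "menu_item" if "menu" in item["text"] else "other"
    return {"label": label, "source": "machine"}

class ProcessStep:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler  # every step begins as human_review

    def run(self, item):
        return self.handler(item)

# Encode the client's process "warts and all": every step human-first.
process = [ProcessStep("classify", human_review),
           ProcessStep("verify", human_review)]

# Second wave of automation: replace one step where the data supports it.
process[0].handler = automated_classifier

result = process[0].run({"text": "steak strips menu customization"})
print(result["label"], result["source"])
```

The point of the structure is that automating a step is a one-line change to the process definition, so the cost and quality of each step can be compared before and after the swap.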
Richie Cotton: Okay. We've talked a lot about different use cases around the business. One area we've not really touched on is how these technologies affect data teams. Obviously data teams are usually familiar with AI, but how do they make use of these other AI tools, or this combination of AI plus humans together?
Scott Downes: The most impactful way our data teams can engage in driving our company's success is in understanding what the right moves are to drive the right level of efficiency and quality. We wanna make our clients happy and we wanna lower costs. So when we evaluate tools, and we evaluate a lot of tools, we're like kids in a candy store when it comes to all the different technological advancements, whether they be in AI generally, with OpenAI specifically, or with other companies doing great work out there.
The key thing for us is to figure out how to onboard and internalize the right approaches in a way that moves us out of science experiment and into business outcomes that are favorable for the company. So as an example, how do you decide to use GPT-3? How do you decide to use any particular technology, especially new technology where there's some element of risk, you're sticking your neck out, and you've got a client who's dependent on you making the right decisions?
So we try to be data-driven in all of our decisions. I think that's everyone's aspiration; the details of how you achieve it are sometimes complicated. We've established a common process, a practice for how we do things, that typically revolves around using notebooks, Google Colab notebooks, that allow us to answer business questions with literal technical integrations embedded.
As an example, because OpenAI and GPT-3 have been API-accessible from the start, there's a simple integration we can embed in a live notebook, and we can work through business problems and do tests and trials together, the same way we'd decide to prioritize a specific feature in our platform based on previous data about where we've experienced pain points. So if we find we're spending too much time in a stage because of poor training, our hypothesis is that we add training for this number of people, then we see the outputs and measure them. It's the same thing here. The key, though, is to pull AI/ML work out of the lab and onto the factory floor; you have to have a clear way to productionize it. I'm still hearing these scare numbers sometimes, like 50% of trained models never go into production. To me, it's not a different problem than has ever existed before in the technology space.
It used to be there were the same scary numbers about IT projects in large corporations: oh, half of them fail and they're all late. In reality, what we need is an agreed-upon set of measurable factors that make a difference to the business and that we're all aligned on. Let's increase our gross margin; let's increase our revenue.
When you look at those specific problems, you're factual in your approach, you think about what the literal impact will be, you can measure and observe it in the form of processes, and you can A/B test it against human results, you have a lot higher confidence in deploying. So for me, the question is: how do we make smart decisions with technology?
And the answer, as it relates to AI, is that you try to do the same things you do elsewhere: make sure there's a business case, understand the cost, understand the impact, and if the cost-benefit analysis is on your side, you go for it. You just make the right decision.
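The kind of notebook-based evaluation Scott describes, measuring a candidate model against human-agent results before deciding to automate, might look roughly like this. The labels and per-item costs below are made-up illustrations; a real cell would pull them from live data and an actual model call.

```python
# Hypothetical notebook cell: decide whether to automate a classification
# step by scoring a candidate model against human-agent labels.
# All data and costs here are illustrative assumptions.

human_labels = ["burger", "steak", "burger", "salad", "steak"]
model_labels = ["burger", "steak", "salad", "salad", "steak"]

matches = sum(h == m for h, m in zip(human_labels, model_labels))
accuracy = matches / len(human_labels)

cost_per_human_review = 0.50   # assumed cost per item for an agent
cost_per_model_call = 0.02     # assumed cost per item for the model

saving_per_item = cost_per_human_review - cost_per_model_call
print(f"accuracy={accuracy:.0%}, saving per item=${saving_per_item:.2f}")

# An agreed-upon, measurable decision rule: automate only if the model
# clears a quality bar; otherwise keep humans in the loop.
automate = accuracy >= 0.9
print("automate step:", automate)
```

The value of keeping this in a notebook is that the business question ("is the quality bar met, and what do we save?") and the technical integration sit in one place that both sides can inspect and rerun.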
Richie Cotton: That seems like a pretty sensible approach: do the same as you do elsewhere, see if it's actually gonna benefit you, try it, and if it doesn't work, move on and do something else.
Scott Downes: You'd be surprised how many people don't think of it that way. It's more like: AI's really cool, I need to sprinkle some magic pixie dust on my platform, let me go use this just because.
Richie Cotton: It would be amazing if that worked.
Scott Downes: I wish.
Richie Cotton: All right. So I know making predictions is a bit of a mug's game, but we've sort of established that GPT-3 is pretty much a game changer, especially with ChatGPT. Now there are rumors that GPT-4 is coming out sometime in 2023, so we're at a tipping point of having very useful generative AI.
So can you talk about your predictions for the effect on businesses of this ever-increasing power of generative AI?
Scott Downes: Well, I'm gonna make some boring predictions, not because I'm scared to stick my neck out, but because I think there are some things that are pretty straightforward. There have been people who've argued that the internet hasn't made a massive difference to actual worked hours, or even to productivity. And what I generally tend to believe is that, whatever the field, the hype bubble pops, and predictions like "overnight we're gonna have autonomous 18-wheelers on all of our interstate highways and every truck driver in the country will be unemployed" don't tend to come true.
And sadly, the productivity enhancements sometimes don't come either. So my modest prediction is simply that there will be widespread adoption of AI technologies and it will become normalized. People will find practical ways to improve their margins by 10% here and there, they'll find that it's not so scary after all, and they'll forget why we were talking so excitedly about it in the first place. My other prediction, which is much more idealistic and hopeful, and probably going to be wrong, is that at some point in the near future the advancement of these technologies is gonna start to actually affect the real-world shape of people's jobs: their lives will be better, they'll be doing less grunt work and more high-value work, and they'll be happier in their jobs.
They'll spend more time with their families, and they'll feel more of a sense of peace and wellbeing. I realize that's wildly idealistic and optimistic; I have to say that's not what the technology revolution has brought so far. But some of us are ready for it. We believe it's about time for the technology that exists to help us solve problems and do the work we used to spend a bunch of time doing. It's about time to take a few of those hours back for ourselves.
Richie Cotton: Absolutely. A shorter working week and a happier life is all you could wish for, really.
Scott Downes: I truly believe it.
Richie Cotton: Super. That is a little bit general, though, so I'm not quite letting you off the hook. Is there anything you think doesn't quite work yet but will in the near future? Are there any things that you see as just around the corner?
Scott Downes: Actually, I think generative images and text are somewhat at the level of technology demo. One of the things I find exciting and interesting, and I'll just take a few examples happening around me right now: we have a PR person, Andrew, who works on our team and is using GPT-3 to generate text.
And what he's doing is modeling a sort of behavior that will change the future career path of other people in that role; they're gonna see their identity and their career development in a different way. And we have a designer on the team who's not afraid of DALL-E, who's not talking about how it's not real art or being defensive and scared, but seeing it as a way to make it easier for him to iterate through ideas and have a scratchpad.
And I think that right now those sorts of solutions are part of a creative brainstorming workflow, but they will become more part of a production workflow. Right now, in our internal tools, there are some places with imagery and artwork. In the past those were generated by a designer sitting down, talking with the team, maybe pulling some stock photos, you know, the standard group of people around a conference table all pointing at something.
Where we are now is that we're using generative tech like DALL-E or Stable Diffusion to create images that we then react to, adjust, and touch up. And I do think we'll get to a place where there is an understanding, I don't really like the term prompt engineering, but an understanding of the way we interact with these powerful models to get the outcomes we want.
I know that sounds super vague and you're not gonna let me off the hook, so I'll be real specific. One of the things I think is really exciting about generating images from text is developing text style guides that produce predictable result sets. As an example, we have started to iterate on a standard way of constructing prompts in order to deliver outputs that are in alignment with our style.
I saw an artistic example of that in New York last week, at an art opening where there was a piece of work, three images created from different sections of a poem by William Blake. And I found it tremendously inspiring to think about how poetry, William Blake's words, creates these visual images that felt very Blake.
And when I think about what we can do in the generative image space as designers, it's more about honing your communication and your language rather than just honing the way you move a mouse or a paintbrush. What we will have, and I hope this is a concrete enough prediction, is that we will create more Andrews and Noahs, folks like the ones I'm seeing grow on our team. There'll be a whole class of people who have internalized the use of these tools and become experts in them. They'll be virtuosos at generative art, generative design, generative text, and they'll be able to do way more than old people like me can do, because they won't be stuck with the old programming.
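A "text style guide" for prompts, as Scott describes it, can be as simple as a fixed set of house-style fragments appended to every subject, so outputs from a model like Stable Diffusion stay consistent. The style fragments and subject below are invented for illustration; only the prompt-construction pattern is the point.

```python
# Hypothetical prompt style guide: a standing house style that gets
# combined with each one-off subject, so image outputs are predictable.

STYLE_GUIDE = [
    "flat vector illustration",
    "muted two-tone palette",
    "clean negative space",
]

def styled_prompt(subject: str) -> str:
    """Combine a one-off subject with the standing style guide."""
    return ", ".join([subject] + STYLE_GUIDE)

prompt = styled_prompt("a team reviewing a process canvas")
print(prompt)
```

Keeping the style in one shared constant means iterating on the house style is a single edit, and every prompt anyone on the team generates picks it up.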
Richie Cotton: That does sound pretty amazing, and it would be great to have widespread adoption of these tools, so that there are enough people who can use them well. I really liked the point you made about having a style guide for prompts in order to get reproducible images. Now I'm thinking, well, maybe you just need to get ChatGPT to generate your style guide and then feed that into Stable Diffusion.
That could be fun. All right, so just to finish up: do you have any final advice for people wanting to adopt generative AI?
Scott Downes: I think: don't be scared, start playing. The excitement of new technology is something not to be feared. Come with a beginner's mind: what are the cool things we can do with this? Come with a playful mind and think about all the possibilities. It's very easy to get caught up in prevailing narratives or discussions of AGI, but there's not an artificial general intelligence that you're communicating with.
It's just you with a cool tool. Have fun and play with it, and don't be scared of it, cuz it's probably gonna help your business and probably gonna help your life.
Richie Cotton: Alright, brilliant. Thank you very much and thank you for your time.
Scott Downes: Thanks. Great to be here.