AI in B2B: Part 3


Key Takeaways: 

  • Humans will still be necessary to validate AI outputs and keep bad data out of systems of record.
  • AI is still seeking maturity. Be cautious of the hype train.
  • End users are the best litmus test for the needs of an AI implementation.
  • The smartest thing an end user can do is ensure they’ve got a trusted advisor to help navigate problems and determine where and when to apply AI.

Summary

The following is a transcription from Episode 30 of The Orange Chair Podcast, “AI in the B2B Space: Part 3”.

In this episode, Anthony and Jim return with KeyMark VP of Operations Cameron Boland to discuss AI for end users, the ROI of intelligent document processing, and more.

Listen to the full episode or any other episode by selecting your preferred podcast listening experience on The Orange Chair Podcast page.

Transcription

Caroline Ramackers (16s): Hello, everybody, and welcome back to the Orange Chair Podcast. Today’s episode is part 3 of our AI in the B2B Space discussion. Join Jim Wanner, CEO of KeyMark, Cameron Boland, VP of Operations at KeyMark, and automation industry expert Anthony Macciola, as they discuss balancing AI in your operational business model, the ROI of AI, the end-user’s perspective, case studies, and much more. Now, without further ado, let’s jump in.

Jim Wanner (44s): Hey, everybody. My name is Jim Wanner, and I’m here on the Orange Chair Podcast. I’m thrilled to be with two of my very good friends, Cameron Boland from KeyMark and Anthony Macciola from Haystac, and what we’re gonna do is we’re gonna walk you through a little bit about what’s happening in the AI world.

And this is kind of a follow-up to the discussions that Anthony had last month. And basically, what we’re gonna do is walk you through a couple of different things. By the end of this, we hope we educate you enough that you feel a little bit more comfortable about how AI can be deployed within your business environment successfully. And basically, the end result is that it brings up your enterprise value and or makes you a better organization to deal in with your constituents.

So, with that, I’d like to give everybody the opportunity to introduce themselves. So, Cameron Boland, please take the…

Cameron Boland (1m 34s): Hi. My name is Cameron Boland, and I’m vice president of operations here at KeyMark. And as Jim said, we’ve been colleagues and good friends for a long time. I’ve been here for a little over 22 years. Very excited to be talking with you today.

Jim Wanner (1m 46s): And, Anthony, if you don’t mind chiming in here, that would be fantastic. 

Anthony Macciola (1m 50s): Yeah. Thanks, Jim. Anthony Macciola, I’m a co-founder at Haystac, and some of you may know me from my past gigs as CTO at Kofax, and my last gig was as the CIO at ABBYY. Been in the capture space for just a little while.

Jim Wanner (2m 12s): So if you don’t mind, there’s a lot going on in the AI world today. Some of it relates directly to things we’ve done in the past, and I thought we could let Anthony kick it off by giving us a quick overview of some of the requirements to run these different AI components and make sure they’re successful within the operational business model we’re dealing with today.

So, Anthony, if you don’t mind, could you kick us off with that topic? 

AI Requirements in B2B

Anthony Macciola (2m 40s): Yeah. Absolutely. And it’s one that most people don’t think of, right? At least at the beginning. It’s one of those things that creeps up and can bite you hard when you get closer to deployment.

So everyone’s seen and heard about generative AI and all the advancements and the cool things you can do with large language models. Right? But at the same time, you know, being in the capture space for as long as we have, we all realize that many of our customers popping sensitive data into the cloud is probably not gonna happen as fast as everyone in the AI space would like.

So, the net result is that you have to deploy in a controlled environment, whether that’s AWS, Azure, or even on-premise. Right? And the second you do that, you’re faced with coming to grips with the type of hardware and infrastructure you need for these kinds of AI advancements to operate at a reasonable clip, depending on your volumes and your latency requirements. So the days of renting or leasing, or whatever they call it, subscribing to a CPU-based server in the AI space are gone.

You’re talking about leasing or subscribing to GPU-enabled machines. And many of the models that people are playing around with are containerized models that you can use in a controlled environment. I won’t mention any by name, but that market is just exploding now. Right? Those models are not small. You know, 30, 40, 60, 80 gigabytes.

So the kind of memory that you need to load a model, the kind of GPUs that you need to process those models at regular speeds, are not trivial.

So I say all that to say this: everyone seems enamored with large language models today. They serve a purpose, and they can do amazing things, but you want to be selective when you’re leveraging them. Using large language models at production runtime brings with it a material infrastructure cost component, and you need to be cognizant and aware of that.

As you’re modeling out the ROI for these solutions, know that there are supporting or complementary technologies, all sorts of different transformer technologies, that at runtime can produce very compelling results with a fraction of the hardware footprint. So as you’re formulating your AI strategy, you’ve gotta take into account the commercial infrastructure dependencies and costs.

And, you know, my suggestion is you get a handle on that really early on, because if you make design or deployment decisions without taking it into account, you could have some regrets down the road.
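To put rough numbers on Anthony’s infrastructure point, here is a back-of-the-envelope sketch in Python. The parameter counts, 2-bytes-per-parameter precision, 30% serving overhead, and 80 GB GPU are illustrative assumptions, not vendor figures:

```python
# Back-of-the-envelope GPU memory estimate for hosting a large model
# in a controlled environment. All figures are illustrative assumptions.

def model_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Memory for the weights alone (fp16/bf16 = 2 bytes per parameter)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

def serving_memory_gb(params_billions: float, overhead: float = 1.3) -> float:
    """Add ~30% headroom for KV cache and activations (a rough rule of thumb)."""
    return model_memory_gb(params_billions) * overhead

for size_b in (7, 13, 40, 70):            # assumed model sizes, in billions
    need = serving_memory_gb(size_b)
    gpus = int(-(-need // 80))            # ceiling-divide by an assumed 80 GB card
    print(f"{size_b:>3}B params -> ~{need:5.0f} GB to serve -> {gpus} x 80 GB GPU(s)")
```

The same arithmetic shows why the lighter transformer models Anthony mentions, at a fraction of the parameter count, carry a fraction of the hardware footprint.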

Jim Wanner (6m 23s): Well, that’s some good information right there, Anthony. I appreciate you leading into the next word I was thinking of, which brings us to our next topic: return on investment.

I know that some of the hardware we’re talking about can be rather expensive, and ultimately, I think that, in general, most organizations are now buying either for compliance reasons or for a true return on investment. So that leads me to a great question for Cameron about how organizations should look at the ROI of this type of technology.

You’ve been around a long time, and you’ve seen a lot of these deployments. Instead of just throwing the technology out there, organizations are gonna ask: is it gonna provide me with a return on investment, and is that return gonna come in maybe 6 months, a year, two years, whatever it might be? How should an organization go about evaluating the true return on investment of AI technology to ensure they get it within the time frame of their financial requirements?

The True ROI of AI

Cameron Boland (7m 21s): Well, I think the first key to this whole process is actually looking at an ROI. Too many businesses get kinda caught up in what Gartner refers to as the hype cycle with AI or any other emerging technology: there’s this really cool new thing out there, and I gotta apply it to something. Anything. I gotta have AI in my business. And that’s really not a good way of looking at things, because there’s plenty of stuff today it could do, but not as many things it should do at this stage of its maturity.

So you really do wanna look at very specific business processes which are gonna return dollars to the business and to the bottom line, or it’s not worth doing.

You know, if you’re not looking at a process where you’re handling thousands or tens of thousands of transactions a month, and gaining incremental pennies or dollars on each of those transactions by automating it, then you’re really never gonna come up with a good, measurable return on your investment.

So taking a look at very business-focused or process-focused AI implementations, I think, is really the way to go. You’ve got to start with something that has a defined process that is gonna allow you to return dollars to the business and pay for the project.

And then start reaping benefits down the line within probably 8 to 12 months, or it’s really not worth doing. The pace of change is just too great. And if you try to boil the ocean, you can end up really with nothing at the end of the day.
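As a concrete, hypothetical version of the math Cameron is describing, here is a minimal payback-period calculation; every number in it is made up for illustration:

```python
# Hypothetical payback-period model for a process-focused AI project.
# Plug in your own volumes, per-transaction savings, and project cost.

transactions_per_month = 20_000     # "thousands or tens of thousands" a month
savings_per_transaction = 0.75      # incremental dollars gained per transaction
project_cost = 120_000              # software, infrastructure, implementation

monthly_savings = transactions_per_month * savings_per_transaction
payback_months = project_cost / monthly_savings

print(f"Monthly savings: ${monthly_savings:,.0f}")      # $15,000
print(f"Payback period: {payback_months:.1f} months")    # 8.0 months
```

With these assumed numbers, the project pays for itself in about eight months, at the front edge of the 8-to-12-month window Cameron describes.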

Jim Wanner (9m 2s): That’s a great point. And I think that Anthony, you and I have chatted recently about how we might be able to deploy these solutions within a shorter time frame than we did in the past to produce a better ROI. Do you still feel that that’s where things are heading today, or do you think that we might be wishing for something that’s not available to us? 

Anthony Macciola (9m 23s): Yeah. I think when 2024 is behind us and we’re looking back, we’ll be able to characterize 2024 as a major year of disruption in the intelligent document processing space, the IDP space.

I think the one thing the selective, appropriate use of AI is going to do is collapse the upfront setup and configuration and the ongoing maintenance. Think of a typical IDP deployment today, and just think of the sales cycle, right? Let’s do a POC. You’ll get us samples, and that POC is gonna take a couple of weeks or a couple of months. And at the end of the POC, we’ll have a pretty good idea of whether, and how well, we can solve the problem. Right? Buyers want quicker time to value and lower total cost of ownership, right? AI is gonna be able to demonstrate its ability to solve these problems by essentially eliminating the upfront training, configuration, and scripting processes.

So I think that in itself will have a massive impact on ROI and time to value, right? It should shorten sales cycles, which is good for everyone all around, right? Customers and vendors and providers. So I actually think that’ll be the single biggest material impact of AI in the IDP space: the setup and ongoing maintenance associated with deployments.

Jim Wanner (11m 12s): Sounds almost too good to be true. 

Anthony Macciola (11m 15s): It does when I say it, and a lot of people wonder what I’m smoking. But it’s a reality that’s going to become more and more obvious and apparent as we move into 2024.

Jim Wanner (11m 32s): Cameron, in your role, you’ve seen a lot of these machine-learning solutions deployed over the last 15 to 18 years. Some of those solutions have taken real effort from the customer to gather the documents, make sure they had the right document set, and even identify the correct documents, where organizations might argue over what a document type even is. It takes too long to gather them, too long to push them out, too long to really set these up. If Anthony’s correct and we can deploy these solutions faster, better, and cheaper, what’s a deployment gonna look like compared to what we’ve seen over the last 15 years? How’s this gonna look different?

Deployment Time Today vs. Tomorrow

Cameron Boland (12m 18s): Well, you touched on what’s gonna be the biggest impact, and that’s gonna be on the business; it’s not so much on the implementer. I do have a good example of a customer who was doing mailroom automation about 15 years ago, using some early machine-learning-type technology, what I’ve more recently heard referred to as good old-fashioned AI.

Jim Wanner (12m 44s): I’ve never heard of that term, but I love it.

Cameron Boland (12m 45s): But, yes, in that specific example, the sheer volume of samples that had to be gathered to train the system to handle all the different types of incoming mail was enormous, to the tune of a thousand hours on the customer side. That’s not the integrator’s or implementer’s time; that’s just people taken out of the business, subject matter experts who had to look through all of these documents and figure out what’s what. And, of course, the problem is that when you have a sample requirement that large, it takes multiple people. And guess what? Multiple people don’t always agree on what a document even is. So all of that had to be sorted out. The end result was a successful project, but it was hard. It was a very difficult project to implement, and if I recall correctly, it was well over a year before it actually got into production.

Now, they’ve used it for 15 years, so I’m sure they’ve made plenty of return on that investment. But fast forward to today: without that samples requirement, we might have been in production in a matter of a couple of months, and that includes setting up the infrastructure, testing, and validating.

And that allows you to pull that ROI way forward into the current year. Furthermore, it implies that as your business changes and your document stream changes over time, which it will, there’s gonna be less work needed to keep your model, your AI implementation, up to date with what your business actually is.

You can change with the times when you have a tool that’s not overly burdensome to update.

Jim Wanner (14m 27s): So in other words, from your perspective, do you feel like the administrative resources required for a customer to deploy AI are going to be less or more?

Cameron Boland (14m 37s): I think it’ll be way less time. 

Jim Wanner (14m 39s): Anthony, would you concur with that? 

Anthony Macciola (14m 41s): Yeah. Absolutely. 

Jim Wanner (14m 44s): So the end result of this is we can deploy solutions in a shorter time frame, customers get an almost immediate return on their investment, and they can maintain these as their organizations change and grow and prosper.

Cameron Boland (14m 55s): Absolutely. I think that’s really the differentiating change that we’re seeing in the industry today. 

Jim Wanner (15m 2s): Wow. That’s fantastic. 

Cameron Boland (15m 3s): That’s the difference between what we call AI today and what used to be AI 15 years ago. Good old-fashioned…

Jim Wanner (15m 8s): Good old-fashioned AI, I guess. Anthony, did you have any use cases you might be able to share with us? 

Anthony Macciola (15m 15s): Oh, let’s pick one. We were just looking at one the other day. It was a healthcare document, a benefits document, 30 or 40 pages long, and it was laced with tables. Some pages had two or three tables. Some pages had one table. Some tables were side by side. Different headers, different footers. And I was with a partner who was trying to use good old AI to do that, and after I don’t know how many hundreds of hours of trying, they kind of gave up. Then I saw an application of a model that had been trained to identify and parse tables dynamically, on the fly, go through that 35-page document in about 30 seconds, identify every table, and parse it out in JSON format with a level of accuracy that was just mind-numbing.

So something that was hundreds of hours of scripting, natural language processing, and keyword searching got collapsed down to about 30 seconds. In fairness, not everyone should expect those kinds of results, but it kinda does show you. The other example I’m thinking of, Jim, is document separation in the IDP space today.

One of the most challenging things to date, and I’ll use an analogy that everyone has probably experienced: a loan packet. Right?

You get a PDF with a hundred pages in it. It represents a loan closing, and it may have 50 different documents in it. Different lengths, different types. Where do they start? Where do they end? People used to use separator pages, barcodes, fixed page counts, all sorts of creative ways of trying to separate them, right? And I’ve seen models deployed that read the document like a human would, determine continuity breaks, and do real-time page separation with zero setup and training.
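A minimal sketch of the continuity-break idea Anthony describes might look like the following; the page embeddings, cosine similarity, and 0.6 threshold are assumptions for illustration, not any particular product’s method:

```python
# Sketch of zero-setup document separation via "continuity breaks":
# embed each page, then split wherever adjacent pages stop looking alike.

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_document_starts(page_embeddings: list, threshold: float = 0.6) -> list:
    """Return page indexes where a new document likely begins."""
    starts = [0]  # the first page always starts a document
    for i in range(1, len(page_embeddings)):
        if cosine(page_embeddings[i - 1], page_embeddings[i]) < threshold:
            starts.append(i)  # low similarity to the prior page = continuity break
    return starts

# e.g. embeddings = [embed(text) for text in loan_packet_pages]
# find_document_starts(embeddings) -> [0, 3, 7, 12, ...]
```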

Jim Wanner (17m 48s): Well, if you can do separation, that’s the holy grail of this industry.

Anthony Macciola (17m 53s): It is. So, you know, AI is cool just in general. Right? But how do you monetize it, and how do you use it to solve real-world problems that deliver business benefits? I think that’s what it all boils down to. Right? And I think for a lot of organizations, it’s worth going in and looking at where they’re spending a lot of labor today, whether it’s on setup, separation, document verification, or quality control. Those are the specific areas where, used thoughtfully, AI can make a big impact in reducing human touch and labor.

Jim Wanner (18m 37s): That’s great. And now that we’re back to the human part of this, it was kinda fascinating. Three years ago, I read a report saying that 31 million people were gonna lose their jobs.

But the interesting part is that the same report also said 38 million new jobs are gonna be created by this revolution. Right? So if you look at what’s happening with this overall space, the part that people just don’t seem to grasp is that you’re still gonna have to have some kind of human component associated with artificial intelligence.

The risk of hallucinations has always been a concern with any machine learning or AI tool. So how is the human in the loop gonna really be a part of these solutions? And is it gonna be significant for every deployment that we run into? Cameron, do you mind getting us started?

AI + Humans

Cameron Boland (19m 29s): Well, I think it absolutely is gonna be significant, because you never want a situation where bad data is getting into your system of record. There are things that can be returned from an AI implementation, like confidence scores on how well it has taken a word off a piece of paper, things of that nature.

So you can start to put in safeguards where a person is gonna look at the things that are not confident, or the things that are extremely critical to the business process, to validate that AI is getting you the right results. Again, it’s all because you don’t wanna get bad data farther down the line. So somebody’s got to take a look at the things that are really critical or the things that are not confident.

So I think we’re gonna see that successful implementations leverage that from the outset. It’s not gonna be added as an afterthought; it’s gonna be built in from the beginning.
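A minimal sketch of the safeguard Cameron outlines, routing low-confidence or business-critical fields to a person, might look like this; the field names, threshold, and shape of the extraction result are assumed for illustration:

```python
# Route extracted fields to a human reviewer when confidence is low or
# when the field is critical to the business process.

CRITICAL_FIELDS = {"invoice_total", "account_number"}   # always reviewed
CONFIDENCE_FLOOR = 0.90                                 # below this -> review

def needs_human_review(field: str, confidence: float) -> bool:
    return field in CRITICAL_FIELDS or confidence < CONFIDENCE_FLOOR

extraction = [                       # (field, value, model confidence)
    ("invoice_total", "1,240.00", 0.99),
    ("vendor_name", "Acme Corp", 0.97),
    ("po_number", "P0-8812", 0.71),  # low confidence: likely a misread
]

for field, value, conf in extraction:
    route = "HUMAN REVIEW" if needs_human_review(field, conf) else "straight through"
    print(f"{field}: {value!r} ({conf:.0%}) -> {route}")
```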

Jim Wanner (20m 28s): That’s fantastic. So, Anthony, what are your thoughts on this one, on this topic? How do we keep humans a part of an AI solution? 

Anthony Macciola (20m 38s): Well, anyone who’s been near the capture market for a while knows the one question that always gets asked: what happens when it doesn’t work? And that assumes you have the ability to determine that it hasn’t worked, right? Try to reconcile that with all the hype going on around AI today. Large language models don’t return confidence factors. Large language models don’t return the geometry of where they found the data. They just return it, and then you’ve gotta figure out what action to take.

So, I think the folks who have been in the industry for a long time realize you can’t deploy something without a human-in-the-loop factor. The question is, how often do you have to invoke them? And when you do invoke them, how quick and efficient can you make their interaction, you know, authenticating something, correcting something? I think the answer is deploying AI in a way where you get those confidence factors, like Cameron mentioned, and where you get the geometry.

So when you need to go look at something, you know where to look. Right? Using those the appropriate way makes the instances where a human does have to look at something the most efficient process possible.

I remember being in conversations where people would confuse accuracy with straight-through processing and think they’re the same thing. Right? I got 98% OCR accuracy, so that means I get 98% straight-through processing? Not even close, right? If my recall is good, getting 60 to 70% straight-through processing in any given deployment is pretty impressive, which means you need a human in the loop for the remainder, right? I think smart people are gonna figure out how to use AI to increase the efficiency of that human-in-the-loop capability, but it is always gonna be there.
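Anthony’s distinction is easy to make concrete: a document only goes straight through if every field on it is correct, so per-field accuracy compounds. A quick illustration with an assumed 20 fields per document:

```python
# Why 98% field accuracy does not mean 98% straight-through processing.

field_accuracy = 0.98
fields_per_document = 20  # assumed document complexity

straight_through = field_accuracy ** fields_per_document
print(f"~{straight_through:.0%} of documents need no human touch")  # ~67%
```

That lands right in the 60 to 70% range Anthony cites, with a human in the loop for the rest.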

Jim Wanner (23m 9s): Which is good for all of us. 

Anthony Macciola (23m 11s): Absolutely. And as good as AI gets, just as humans make mistakes, AI is gonna make mistakes. You need to ensure that you have a quality control mechanism in place to compensate for that.

Jim Wanner (23m 27s): Cameron, any last thoughts you’d like to chime in on? 

Cameron Boland (23m 32s): Well, actually, I wanna expand a little more on this human-in-the-loop concept, if I may. Say you need to do an analysis of existing contracts, or you’re buying a portfolio of contracts, that sort of thing. The AI use case I have seen is, basically, the ability to summarize all the major clauses within those contracts, pull that data out, and put it in a tabular format where a knowledge worker can actually look at it and review it.

And that’s a case where the idea of building in a review process by a human comes in: we’re helping them review those agreements more efficiently, but they’re still handling the final analysis of those agreements. And I think that really neatly sums up what this technology is gonna bring to the fore.

It’s really an aid to increase the speed, the velocity of work, as well as the accuracy, because you’re taking away what I’ll call the mind-numbing work of going through with a fine-tooth comb, surfacing the things that may be important, and pointing people in the direction of where they need to look. And I think that’s really the revolution that’s going on here.
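A hedged sketch of the contract-review assistant Cameron describes might prompt a model to pull the major clauses into a table for the knowledge worker. The clause list and the `call_llm` stand-in are assumptions, not any specific vendor’s API:

```python
# Sketch of clause extraction for human review. `call_llm` is a stand-in
# for whatever model endpoint you use; the clause names are illustrative.

import csv, io

CLAUSES = ["termination", "liability cap", "data protection / GDPR", "renewal"]

def build_prompt(contract_text: str) -> str:
    return (
        "From the contract below, extract these clauses, one per line, "
        f"formatted as 'clause | verbatim excerpt | page': {', '.join(CLAUSES)}. "
        "If a clause is absent, write 'MISSING' so a reviewer can flag it.\n\n"
        + contract_text
    )

def to_review_table(llm_output: str) -> str:
    """Turn the model's line-per-clause answer into CSV for the reviewer."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["clause", "excerpt", "page"])
    for line in llm_output.strip().splitlines():
        writer.writerow([cell.strip() for cell in line.split("|")])
    return buf.getvalue()

# review_csv = to_review_table(call_llm(build_prompt(contract_text)))
```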

Jim Wanner (24m 49s): Sounds like a good compliance item.

Cameron Boland (24m 50s): Absolutely.

Jim Wanner (24m 52s): Great. Anthony, anything else you would like to add and chime in on? 

Anthony Macciola (24m 56s): No, I think that was a good example, Cameron. You know, having AI go through what a human would do with a highlighter, right, and highlight what’s important to them, so you kind of create, in real time, the CliffsNotes for someone to review or to come back and make a recommendation. Right? Here are the clauses my organization requires when we do leases or purchase contracts, the ones that ensure we’re GDPR compliant. I’d like the AI to go through and make sure my contracts have some sort of reference similar to those clauses, and the ones that don’t, I’d like to know about, because I probably need to go back and renegotiate or update them.

So, yeah, the idea, Cameron, of leveraging AI as an assistant as opposed to a complete automation tool: there are instances where it will definitely be able to help with automation, but there are an equal number of instances where it will be a great assistant. I think that’ll be an interesting thing to watch unfold.

Jim Wanner (26m 11s): Well, I think everybody out here is probably gonna appreciate the fact that you were both very kind and thought about this from the end-user perspective. Is there anything else you’d like to let the end users know, so they can think about it before they begin deploying an AI solution? Cameron, we’ll give you a head start.

Cameron Boland (26m 28s): Well, it’s always critical, whenever a business is looking at a project or an endeavor such as this, to start with the end users. They’re the ones who know where the pain is from a day-to-day perspective.

So, yes, as a business leader, you know about compliance issues, and maybe you know the numbers, what’s coming through. But it’s the end users who are gonna point you at what’s hard and what’s not. What’s the task I wish would go away, because I do it a thousand times a day? Those are the things that will point you in the direction of how the implementation needs to be done.

So what can the end users expect? I hope what they can expect is that businesses will start with them because the goal is to make them more efficient. If they’re more efficient, the business is more efficient and everybody wins. 

Jim Wanner (27m 16s): Great summary. Anthony, any summary you would like to share with our end users here? 

Anthony Macciola (27m 22s): If I can give any advice: don’t buy into the hype. You know, I remember when big data came up, right? Any new technology that comes out has the old hype cycle, right? And everyone thinks, the first time they see it, that it’ll solve world hunger and end all wars.

And then, let’s just say, the bloom comes off the rose when people realize the limitations and the implications that come with it. Right? So do not buy into the hype. The smartest thing an end user can do is make sure they’ve got a trusted advisor as a partner who can help them navigate the problems they’re trying to solve, thoughtfully determine where and when to apply AI, and make sure there’s gonna be a measurable, demonstrable benefit from the deployment.

Jim Wanner (28m 22s): Gentlemen, I appreciate your time. Thank you so much, Cameron, and thank you so much, Anthony, for jumping on this podcast and sharing your years, actually, I should say decades, of wisdom with the end users, helping them understand how to look at AI and how to deploy it so they get a better return on investment and, hopefully, solve some of the compliance issues they might be facing today.

So thank you, ladies and gentlemen. We appreciate y’all and look forward to hearing from you soon.

Caroline Ramackers (28m 54s): Thank you for joining us for this episode of the Orange Chair Podcast. This podcast is brought to you by KeyMark Inc., experts in automation software for over a quarter of a century. Never miss an episode by subscribing to our channels wherever you listen to podcasts.

You can find us on Instagram at the Orange Chair Podcast. For more information, please visit our website at keymarkinc.com.

Until next time,
