AI in B2B: Part 1

Listen now on your favorite streaming service!

presented by KeyMark

Understand AI in B2B in this conversation between Jim and Anthony.


Key Takeaways: 

  • Fresh AI hype overlooks some of the challenges of predictability, trainability, and privacy.
  • The IDP market has accelerated tremendously and is set to explode.
  • New vendors without technical debt will emerge as major players in the AI-offering space, though some may cut corners on key AI considerations and cautions.


The following is a transcription from Episode 28 of The Orange Chair Podcast, “AI in the B2B Space: Part 1.”

In this episode, KeyMark CEO Jim Wanner sits down with special guest Anthony Macciola, an expert in the automation industry, to kickstart our three-part series, AI in B2B.

Listen to the full episode or any other episode by selecting your preferred podcast listening method on The Orange Chair Podcast page.


Caroline Ramackers (17s): Hello, everybody, and welcome back to the Orange Chair Podcast. Today’s episode is part one of a series discussing AI in the B2B space. Joining us today to discuss these topics is Jim Wanner, CEO of KeyMark, and special guest, Anthony Macciola, an expert in the automation industry.

Now, without further ado, let’s begin.

Jim Wanner (41s): Good afternoon, everybody. I’m Jim Wanner. I’m the CEO of KeyMark, and I am pleased to be here with my friend Anthony Macciola. We have known each other for probably the better part of 18 years, have worked extremely well together, and have done some amazing things within what we now call the artificial intelligence and machine learning space.

So, Anthony’s joining me today. We’re going to give you guys a heads-up as to how we got here within this artificial intelligence era, and we’re going to give you some vision as to where we can go with this and how you could apply it in a mission-critical application to make you and your organization thrive going forward. So, without any further ado, I’d like to hand this over to my friend Anthony and let him introduce himself.

Anthony Macciola (1m 25s): Thanks, Jim. Yeah, it seems like we’ve known each other for a million years. We’ve been doing this for a very long time. For those of you who don’t know me, I started in what was the capture market, probably in the late 80s, early 90s, back when imaging was being done on mainframes with FileNet and IBM. I was with Kofax for about 25 years; I was employee 34. For the last 15 years there, I was their CTO, running their advanced research lab, and I was responsible for the introduction of a lot of their industry-leading products like Capture, their capture SDKs, VRS, KTM, and some of those things.

After leaving Kofax and taking a little bit of a break, I then spent six years with ABBYY as their chief innovation officer. I helped take them out beyond the OCR space into their current leadership position in the IDP market, and had a lot of success bringing a brand-new product to market for them called Vantage. So, I’ve had the good fortune to be able to play with some of the big players in the industry over the last 20, 25 years, and the good fortune to work with some of the best partners in the industry, like KeyMark, and have a lot of joint success over the past 10, 15 years.

Jim Wanner (3m 14s): Great. Thank you, Anthony. If you could do me a favor: with your extensive background and understanding of where we are today, could you just give everybody a snapshot of the last couple of decades as to how we got to where we are today, and give them some ideas as to the origins of this wonderful technology that we now call artificial intelligence?

Where are we? How’d we get here?

Anthony Macciola (3m 38s): Yeah. It’s interesting to see the role capture has played in the emergence of AI. Right? I remember back in the mid-nineties creating something called VRS, which, back at that time, used neural net technology to optimize image processing, back when no one knew what neural nets meant. I remember introducing classification for the very first time in the early 2000s, using support vector machines to auto-identify the kind of document you’re looking at. Back then, it wasn’t called AI. No one knew AI from, you know, the doorknob. But throughout the evolution of the capture industry, we’ve been deploying what people now know and love as AI for ages: the ability to extract fields from forms. Today, people would call it key-value pairs. We were doing that stuff way before the concept of key-value pairs ever existed, right? So, it’s interesting. I’m on calls today talking to people who are noticeably younger than me, schooling me on what AI can do for documents, assuming old guys like us, Jim (or I won’t loop you in with me), don’t know anything about this, when we’ve been doing this forever. Right?

Jim Wanner (5m 29s): Isn’t it fun when you get challenged? Like, “do you really know this stuff?”

Anthony Macciola (5m 32s): Absolutely. You know, how people solve the problem, the technology, tends to change and evolve and mature, right? And that’s all good. But we’ve been having to solve these problems for 25 years. And it’s interesting, if we get a chance to get into it: a lot of the new AI enthusiasm and hype out there completely overlooks some of the challenges that we’ve had to deal with over the past 25 years, like predictability, trainability, and privacy, all of that, right? Things that, for us, we’ve known are deal breakers. A lot of the new entrants into the market who are leveraging advancements in AI are either ignorant of that or just conveniently ignoring it, hoping it won’t be an issue. It’s going to be interesting to see how all this stuff emerges.

Jim Wanner (6m 33s): So, to summarize, we really got started in this new artificial intelligence area with intelligent documents, which is something that you know extremely well and have been doing successfully for the better part of a few decades. So, looking at the past and now seeing what’s happening, has there been a dramatic enough shift in today’s world that the intelligent document processing market could change over the next 18 months, or do you think it’s going to be static?

The Future of AI

Anthony Macciola (7m 6s): No. I think there’s going to be a major disruption in what we all now call the intelligent document processing market. The IDP market, for a million years, was called the capture market. Now, because of this renaissance, this rebirth, there’s a new name, there are new entrants. There are all sorts of things.

The biggest challenge with the IDP market, or the capture market, has been the cost, effort, and expense associated with setting stuff up and maintaining it. As a result of that entry cost being fairly high, market adoption was limited. It had to be big, big deployments to do this at scale, to justify it, right? And a lot of the older approaches that were very reliable and very predictable were rules-based. If you were willing to take the time to learn what the rules were, to set them up, and to maintain them, they actually worked reasonably well. But there was usually a big dollar sign tied to setting these up, and the time to value, because of that effort, was pretty drawn out. For some customers, that represented risk, because you have to go a while before you see any fruit from your investment, right?

A lot of the advancements from the people who are going to employ AI correctly, and put it in service to solve IDP challenges, are going to dramatically reduce that upfront setup cost and effort. Matter of fact, I fully believe that before we exit 2023, processing structured forms (parsing them, getting information out of them) and parsing semi-structured forms like invoices, bills of lading, and explanations of benefits will come very, very close to, if not achieve, a zero-setup environment. So, with multimodal transformers and prebuilt models, and all of the new transformer technology that’s coming out in the AI space, putting those together, supplementing them, complementing them, and deploying them effectively in a traditional IDP environment is just going to blow the doors off setup and maintenance, make time to value so much faster, and deliver a dramatic reduction in total cost of ownership. Which, for us, you know, providers, partners, solution providers, I think is just going to explode the market opportunity going forward.

Jim Wanner (10m 20s): So you’re talking cost. I get the feeling that there’s more than one cost associated with this. So I think that what you’re chatting about is there might be some software costs associated with it, some implementation costs for technical people. But I get the feeling that what you’re really alluding to is maybe the cost from the end user’s perspective to gather samples. Is that where you’re heading with us?

Anthony Macciola (10m 46s): That’s definitely one of the elements. Right? I remember, we both joked about it. Right, Jim? You go into a customer and say, “Hey, can I get…” and you try to guess the right number to say so they don’t lose their mind. “Can I get 20 samples? Can I get 30?” And I can’t tell you how many calls I’ve been in where the customer throws up their hands: “I have no idea where I’m going to get 30 samples.” So, okay, “Can I get 10? Can I get 5?” Then, when they commit to doing it, it’s like pulling teeth trying to get the samples and trying to get them redacted. So, it’s always been a challenge on the customer’s part. You know, you go back 18, 24 months, to when deep learning was kind of the thing, and now you’re not going in asking for 20 or 30 samples; you’re going in asking for 5,000 samples, because deep learning models need a lot of data, right? So the problem, for a little while, went in the wrong direction. Where it’s swinging back to, from the visibility I have, is: yeah, I won’t need any samples. I’ve built general-purpose models for invoices or EOBs or bills of lading, either based on large public training datasets, or because the technology I’m using only needs 50 or 60 samples, and I’ve already gotten those and I’ve already built them.

Jim Wanner (12m 24s): So they could use our existing infrastructure to actually build a model going forward.

Anthony Macciola (12m 29s): Yes. Actually, I would say it’s slightly better than that. If you pick the right partner and the right vendor, they should be able to provide you baseline models that, out of the box, are getting you precision, recall, and F1 scores in the low nineties. So, your out-of-the-box experience, doing nothing, should probably be on par with or better than some of the systems you’ve been using in the past that had to be set up and trained. And then your ability to do minor fine-tuning, because maybe your documents are somewhat unique, is almost a real-time, ongoing experience that you benefit from just by using the system. So, again, it speaks to time to value.
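For readers less familiar with the metrics Anthony mentions, precision, recall, and F1 are the standard ways of scoring field extraction. A minimal sketch of how they are computed (the counts below are illustrative, not from any vendor's benchmark):

```python
def extraction_scores(true_positives, false_positives, false_negatives):
    """Standard precision/recall/F1 for field-extraction results."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative numbers: 102 fields on the test documents, 92 extracted
# correctly, 6 extracted with the wrong value, 10 missed entirely.
p, r, f1 = extraction_scores(92, 6, 10)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```

With numbers like these, the F1 lands in the low nineties, which is the out-of-the-box range Anthony describes.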

Jim Wanner (13m 25s): So, Anthony, that’s at least a 10% increase over the elaborate rules-based systems that we built in the past. So, if I’m a customer, and I hear all the hype that’s out there today, and I’m trying to figure out what is fact and what is fiction, and trying to decipher what I should really be pointing towards, how is somebody going to get tangible value from this IDP solution? What could they look for?

Value and Concerns of AI

Anthony Macciola (13m 54s): So if you haven’t heard about GPT, I don’t know what island you’re living on right now. Right? So, everyone’s talking about generative AI.

Jim Wanner (14m 5s): Even my eighty-nine-year-old dad knows it, Andy.

Anthony Macciola (14m 7s): There you go. And everyone’s heard about large language models. Right? And guess what? They’re going to end all wars and resolve world hunger. Right? You just have to let the hype curve follow its course for a little while. But when you’re sitting there thinking, “large language models are going to solve all of my IDP problems,” ask yourself a couple of questions. Large language models are always going to give you an answer. Right? And they’re going to give it to you very convincingly. How do you know the answer is right? What level of confidence do you get from a large language model that the answer is right?

So, around predictability: in the IDP space, there’s a level of accuracy that we’re held to. You know, you want straight-through processing levels in the 80% range. You want accuracy scores in the 95% range, right? How do you even measure your answers to determine whether or not you did a good enough job? And, you know, the one question that always gets asked in the IDP space, right, because people just know this: what happens when it doesn’t work?

Well, the question that leads up to “what happens when it doesn’t work” is, “How do you know it didn’t work?” So, people spend a lot of time building human-in-the-loop experiences for what we do, whether it’s for validation or QC. So, with any of the AI technology you’re deploying, if it can’t come back and tell you confidence levels relative to table extraction, field extraction, sentiment analysis, or document extraction, you’re kind of running blind, and you just have to hope it was good enough.
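The human-in-the-loop pattern Anthony describes can be sketched in a few lines: use the confidence attached to each prediction to decide what flows straight through and what gets routed to a person. The threshold, field names, and values below are illustrative assumptions, not any product's API:

```python
# Confidence-gated human-in-the-loop routing (illustrative sketch).
REVIEW_THRESHOLD = 0.90  # tune per field and per SLA in practice

def route_extraction(fields):
    """Split extracted fields into straight-through results and
    fields that need human validation, based on model confidence."""
    auto, review = {}, {}
    for name, (value, confidence) in fields.items():
        if confidence >= REVIEW_THRESHOLD:
            auto[name] = value
        else:
            review[name] = value
    return auto, review

extracted = {
    "invoice_number": ("INV-1042", 0.99),
    "total_amount":   ("1,250.00", 0.97),
    "po_number":      ("PO-77",    0.62),  # low confidence: send to a person
}
auto, review = route_extraction(extracted)
```

The point of the design is exactly the one made above: without a per-field confidence score, there is nothing to gate on, and every field either flows through blind or goes to a human.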

Jim Wanner (16m 19s): Ouch. That’s never a solution, especially in mission-critical or health-related applications.

Anthony Macciola (16m 25s): Well, yeah, and as we know, it’s hard to do one of these deployments and not have SLAs associated with it. I don’t know how you commit to any sort of SLA using a lot of the large language model stuff because it just doesn’t provide you that level of insight. The other thing is fine-tuning. We all know customers have unique environments from time to time, and what you might do from a general-purpose standpoint may need to be refined and adapted to accommodate that. How do you fine-tune a large public language model?

Jim Wanner (17m 10s): Sounds like a security issue, Anthony.

Anthony Macciola (17m 13s): It does, and that was the lead-up to the next one. Right, Jim?

Jim Wanner (17m 14s): Yep.

Anthony Macciola (17m 14s): Are you comfortable putting your patient records or your financial documents up through OpenAI, into the public cloud? So: being able to run securely and protect the privacy of your customers’ information, being able to ensure accuracy and predictability while processing their data effectively, and being able to fine-tune a deployment to their environment; those are just a couple of examples of why just following the hype curve is going to leave you underwhelmed and hurting. I don’t want to diminish AI or leave people with the sense that I’m downplaying it. There are underlying transformer technologies enabling things like ChatGPT that are amazing. There are ways to use those, not by leveraging large public language models, but by using multimodal approaches that can produce very, very impressive prediction and outcome results while maintaining predictability, trainability, and privacy. So, everyone should be very excited about all these advancements in AI. They just need to understand the right applications of those technologies to solve their IDP challenges.

Jim Wanner (18m 59s): I get the feeling you have a few vendors in mind who are doing this well. How do you see this landscape changing? Because you talked a little bit about our legacy applications and it seems that you’re looking to the future, you might have a couple of ideas regarding what this vendor landscape might look like and how they might map out new solutions in a new way. And I guess that’s part of that transformation you mentioned in the very beginning.

AI Vendors

Anthony Macciola (19m 24s): Yeah. There are almost a couple of different buckets here, right? You’ve got the older incumbents that have been around; I don’t need to use their names, we all know them. The 800-pound gorillas in the market. They’ve done an excellent job, right? But they’ve become antiquated, and they’re weighed down under their own technical debt and the fact that they’ve got such large customer bases that expect things to work the same way. Those customers have made investments, right, so the incumbents just can’t throw everything out. It’ll be interesting to see how they adapt to what’s happening in the market.

Then you’ve got a newer, second-generation wave of early entrants that have started to play with AI, and many of them are adapting and, I’m shocked, kind of jumping on the GPT wave, assuming they can just leverage that. I would have expected them to understand some of the shortcomings they’re going to have to overcome. But it almost feels like there’s a wave of vendors out there looking for the GPT checkbox in their marketing material, who see that as more valuable than actually solving problems.

And then there’s a third wave of early entrants that I’m watching right now. Very small. I don’t know if they’ll all be around a year from now. Who knows? But because they don’t have technical debt, and they’ve come late enough to the game that there are enough advancements in AI out there, if they understand the IDP market and can take those technology advancements, apply them in the right environment, supplement them, complement them, and fill in the gaps around some of the areas that I mentioned, I think they’re well positioned to disrupt the market. It’ll be interesting to go out 18 months and look at the normal analyst Magic Quadrants. I’m guessing 12 to 18 months from now, there’ll be players on that list, in the challenger or upcomer space, that no one knows about today. And I bet you, in 24 to 36 months, some of those players will be in the leader quadrant. It’s going to be an interesting evolution to watch.

Jim Wanner (22m 4s): That’s quite a transformation. To go that far in two years.

Anthony Macciola (22m 7s): Yeah. It’s all about the pace of advancement from the technology standpoint, right? But there’s another component to it, because there are a lot of fantastic companies out there with great technologies that never made it long-term. Right? So, it’s about picking the right go-to-market. It’s about picking the right ecosystem. And I think, Jim, companies like yours, KeyMark, play a pivotal role in the evolution and transition of the market going forward, because new technology entrants need a route to market. Right? The dumb ones are going to try to recreate that route to market themselves. The smart ones will realize their prowess and their value lie in the technology, and will enable the right channel, with the industry domain expertise and the sales, marketing, and fulfillment capabilities, empowering them to bring this to market.

Jim Wanner (23m 17s): There’s a lot to be said about that one. We fundamentally believe that Hyperautomation is still the way forward, and that these tools, as stand-alone applications, will never be able to succeed on their own.

Join us Next Time

Caroline Ramackers (23m 28s): Thank you for joining us for this episode of the Orange Chair Podcast.

Stay tuned for our next episode of the Orange Chair Podcast to hear a continuation of this conversation centered around AI in the B2B space.

This podcast has been brought to you by KeyMark Inc., experts in automation software for over a quarter of a century. Never miss an episode by subscribing to our channels wherever you listen to podcasts. You can also find us on Instagram at The Orange Chair Podcast. For more information, please visit our website. Until next time!
