The Future of ECM

Listen now on your favorite streaming service!

presented by KeyMark

Explore the future of enterprise content management with Hyland on The Orange Chair Podcast

The Future of ECM

Key Takeaways

  • Hyland is committed to bringing more AI capabilities to ECM and content services.
  • The growth of remote work and a new generation of workers inspires ECM that is accessible from any access point, device, or use case.
  • ECM and content services tools imbued with AI capabilities are available to us now and can help us achieve fundamental optimization today, while other AI applications take time to mature.
  • Technology changes rapidly; today’s organizations must master agility to continue to be successful.

Summary

The following is a transcription from Episode 26 of The Orange Chair Podcast, “The Future of ECM.”

In this episode, KeyMark hosts John Harrison, VP of sales, and Ben Vaughan, director of solution architects, sit down with Hyland’s VP of Product Management, Scott Craig, to discuss the future of ECM, Hyland’s content services roadmap, and the implications of AI in ECM.

To listen to the full episode or any other episode, select your preferred podcast listening method on The Orange Chair Podcast Page.

Transcription

Caroline Ramackers (17s): Hello, everybody, and welcome back to the Orange Chair podcast.

In today’s episode, our hosts, John Harrison, VP of sales, and Ben Vaughan, the director of solution architects at KeyMark, sit down with Scott Craig, the VP of Product Management at Hyland, to discuss the future of ECM and Hyland’s vision for Hyperautomation.

Now, without further ado, here are today’s hosts.

John Harrison (42s): Hey everyone, and welcome back to another episode of the Orange Chair podcast. I’m your host today, John Harrison. I’m the VP of sales at KeyMark. And joining me today, I have a co-host, Ben Vaughan, with KeyMark. He’s our director of solution architects. And today’s episode is about the future of ECM and Hyland’s vision for Hyperautomation. And we have a great guest today, Scott Craig, and I will let him go ahead and introduce himself now.

Scott Craig (1m 14s): Yeah. Well, thank you. So my name’s Scott Craig. I’m the vice president of product management at Hyland. I’ve been with Hyland about five years now; I came from Perceptive Software, which Hyland acquired a little over five years ago.

I’ve been in this space almost 19 years now, and I’m looking forward to discussing the future of ECM and Hyperautomation.

John Harrison (1m 41s): Well, thanks for joining, Scott. We appreciate you doing that. And, Ben, why don’t you tell us a little bit about yourself as well?

Ben Vaughan (1m 47s): Sure. Thanks, John. So, as John mentioned, my name is Ben Vaughan. I’m the director of solution architects at KeyMark. My team is kind of responsible for sitting between the sales team and the implementation team. I’ve been with KeyMark for 17 years now.

John Harrison (2m 1s): Thank you, Ben. Well, guys, we’re gonna get right into the exciting stuff we have to talk about today. And, Scott, again, thank you for joining us. I know the discussion can go many different ways just based on our conversations prior to this, but Scott, if you could maybe lay the groundwork by giving us an idea of where you guys at Hyland see this market going from the ECM perspective, and also how that fits into the overall Hyperautomation discussion that’s happening across the industry.

The future of ECM at Hyland

Scott Craig (2m 37s): Yeah. Absolutely. I think one of the things that we are all seeing happen is that the amount of content organizations have to manage just continues to grow exponentially. And the types of information and content are also becoming more diverse. Audio and video are obviously a factor there, but the amount of information that we’re expected to manage and provide value around is growing at an incredible scale.

And so I think when we talk about the future of enterprise content management, a lot of it is, in effect: how do we manage this at scale? And that’s where things like Hyperautomation, in my mind, tie together. We have to be able to create ways to ingest, manage, and lifecycle-manage that content, but do it in a way that is sustainable for the organization. What that means is that a knowledge worker can’t be presented with a Google search that brings back thousands and thousands of results. Right? Maybe they’re getting a phone call and they have to be able to answer a question. They can’t go through millions of documents.

We have to be able to provide them the information that they need, in the context that they need it, really, really quickly. Hyperautomation is all about adding that automation and intelligence into all of our processes so that we are able to make faster, more efficient decisions while leveraging the power of all the information we have in our organization. These things do tie really, really well together, and a lot of what Hyland is focused on is bringing automation and AI-type capabilities to our platforms so that our customers can effectively manage and utilize the content that’s within their repositories today.

John Harrison (4m 53s): Thank you, Scott. It’s exciting to see what’s going on in the market. And I think one of the things that is maybe accelerating this need to handle significantly more information is what happened early in COVID, when we realized we’ve got all these people remote. They’re not in a central location anymore; they’re all over the place. In fact, we’re hiring people and saying, “Hey, you can work out of your home now.” A lot of companies have gone completely remote. And I think we’re lucky in one sense that we’re in an industry where we can help serve those people better and support that change in the way we work now and how we have to collaborate from a distance. But there are a lot of companies looking at that as well. Talking about Hyland OnBase and all the technologies that Hyland offers, we’re now in a situation where we’re all looking at AI to help facilitate that.

We’re looking at ways to bridge that gap from a communication standpoint where we’re not all in the office. We can’t get up and walk down the hall and see everyone we work with like maybe a lot of people did before COVID.

How do you see the Hyland platform, as it grows, incorporates some of these new technologies, and evolves, not only helping organizations move forward, become more efficient, and handle that type of setting better, but also integrating with the other platforms that are out there? You know, the Microsofts, Oracles, and SAPs that we have to coexist with and always have, but now we’re all looking at AI and incorporating it into our platforms.

Scott Craig (7m 5s): Yeah. It’s a fantastic question. And it’s one where we talk to a lot of customers and do a lot of market research to try to figure out the best ways to address it, but I think it comes down to a couple of foundational components.

The first is, and you talked about interoperability, but I think just in general, for us to serve our customers where their users are. And that can really be two different things, because you talked a lot about our internal coworkers being remote. But during COVID, there were also plenty of businesses that said, “We can’t allow customers in, and we need to share information”: content we’d had behind the firewall, that required you to come into our business, that we now need to share with our customers. In health care, it’s getting access to your patient record. In government, it could be giving constituents access to any number of different things. Being able to serve your constituents, your patients, your students, your customers where they are also potentially means opening up some of these repositories and this content that had been locked behind the firewall, so to speak. But it also brought the challenge that our coworkers and ourselves, working remotely, need access to this stuff as well.

So I think the first thing is that it drives the need for APIs, and to architect products in a way that allows us to interoperate and provide access points that may be different than we had done traditionally. In an old client-server environment where you could have people working off desktops, we could do things, quite frankly, very differently than when that person may now be at home without the bandwidth they would have had in the office, maybe using a different screen, and in some cases even on a mobile device. So the first thing is architecting your products to leverage APIs. That is something that’s happening across the industry, but that’s, to me, how we make it possible to support the ecosystem, and it also drives our ability to support the user experiences for our customers and our end users. The last thing I was talking about is the need to be able to share content outside of traditional organizational boundaries. We have been investing a lot of time and effort in the technology to do that, but also in the licensing and the pricing. Right? To use a higher-ed example, if you’re gonna open it up so that students can access their records, they may do that once or twice a year; they’re not likely gonna be in there all the time. So an organization is not gonna pay the same to give access to an external user like that as they might for a knowledge worker who’s in the system all the time. It’s a lot of different things that have to converge to make that work. We have to build the right product with the right architecture. We have to provide the right user experiences that support those remote and disconnected workers. And then we have to have the right pricing mechanisms in place so organizations can affordably do that. So, to me, it’s a combination of those three things.

Ben Vaughan (11m 12s): So, Scott, I’m gonna chime in here real quick, based on what you were talking about with user experience. You and I are both kind of old-timers in this industry, almost 20 years each doing this kind of stuff. Back then, when we first started, it was really thick clients. Right? People who needed to access information had a full client sitting on their computers that had to make direct database connections and things like that. The web-based stuff was just in its infancy; we were restricted by browser limitations from going to a true thin client at the time. As we were prepping for this, I went back and thought about it, and I was like, man, we were doing this stuff before the iPhone was even invented. Right? There was no concept of a smooth user experience or a smooth user interface. Because of things like that, expectations have shifted: people expect a lightweight client, something with a nice, smooth, easy interface that’s not hard to use, not hard to learn, not hard to teach new people. I’ve seen clients and applications go that way, you know, more lightweight clients, mobile clients, things like that, and I expect it’s gonna keep heading in that direction as far as user experiences. Is there anything you can talk about as far as what you see in the next year or five years, as it relates to how users are gonna interact with the system?

Scott Craig (12m 44s): Yeah. You’re absolutely right. Kind of a funny anecdote: when I started at Perceptive, the debate was whether we should buy Blackberries or iPhones, and we chose Blackberries. So that shows you how old I am. We all chose the Blackberry until it became very rapidly obsolete. But you’re absolutely right. I personally believe that there will be a place for some amount of desktop application. We’ve all worked with clients that have high-capacity scanners, and until we completely get rid of paper, I still think there’s some need for that type of desktop application that really takes advantage of the processing power of a desktop computer and a scanner like that. But more and more of our work needs to go into a mobile interface. And that’s where it starts to tie back in. First, we have less screen real estate, so what we present to the end user has to be thought through much more carefully. And I do think that ties in somewhat to AI. Obviously, there’s AI that does refinement of content and helps us extract information, identify what the content is, index it automatically, things like that. Then we start to get into the next generation of AI, which is really more about providing insights and predicting what an end user is doing, so kind of auto-generating the content that an end user needs; part of that is being driven by using those interfaces. Mobile has gone from being kind of a new idea to being table stakes, and as we build more mobile interfaces, the types of things people want to do with them are really necessitating the use of AI, because we’re gonna have to get them the right content at the right time in the right interface. And in some cases, too, I don’t know if you guys have run into this.
Some of the content doesn’t present well on a mobile interface; even reading an invoice on an iPhone is not a particularly great experience. So how can I pull off the information on the invoice that you need to make a decision, without making you read the invoice? And there are worse examples, obviously. You can imagine trying to read a transcript, or architectural plans; there are a lot of documents that just really won’t work on a mobile device, but we still have to find ways to enable that worker to be able to do something with them.

So, again, we’re investing a lot in flexible user interfaces that flex depending on where it’s being accessed. So you have kind of a consistent experience from a browser to an iPad to an iPhone. But it all comes down to what is the action that we’re trying to enable and trying to automate as much of that as possible so that you’re not having to read an entire document, but you’re getting what you need.

But, yeah, I don’t believe desktop clients are completely going away, but having flexible, adaptive web interfaces is absolutely critical. And I’ll share one of the things on the roadmap that I’m personally excited about from an OnBase perspective: we’ve got this project that’s launching in 23.1. It’ll be available this fall for OnBase users. It’s an app-building client, a low-code tool that essentially allows you to create relatively simple industry solutions. And I think part of this transformation is putting some power into the hands of partners and technical customers, because the need for these interfaces is growing faster than Hyland can build them.

So I can’t build all of the user experiences that you and your customers and then their customers need. So we have to provide the tools to do that. And that’s the other, probably, component of this is that low-code enabling customers, services, partners, to build the applications that our customers need to provide these mobile interfaces. So it’s a lot of exciting things coming together in one place, but, I hope that gives you some perspective about kind of what we’re thinking about.

Ben Vaughan (17m 48s): Yeah. Definitely. And I’m glad you brought up the app builder. That’s something I’m personally excited about coming out: the ability to go in and really build persona-based front-end applications, so that I have everything I need for my job right there at my fingertips rather than having to click around and find what I need. So I think that’s definitely gonna be a game-changer.

One thing I did want to go back to, and we’ve mentioned it several times, is just the concept of artificial intelligence. Right? We see it in every article that’s out there. Every publication says, “AI, AI, AI is out there!” And it’s kind of scary looking at it if you don’t really understand what that means. I’ve put it out there to folks: AI is not new. Artificial intelligence has been around forever. You mentioned invoices earlier; if you build a workflow that says, “Hey, evaluate the price of this invoice, and if it’s above $1,000, send it over here; if it’s above $10,000, send it over there,” that is artificial intelligence being programmed into software to make a decision that a human would normally make. Right? Putting it in that context, it doesn’t necessarily seem as scary. But with what’s out there and what people are reading in the news, what do you think organizations can do to be prepared for some of the user pushback on that? You know, “Hey, I’m nervous about AI being used,” or “Robots are taking our jobs,” that kind of stuff. Do you see a future where AI is doing a lot of that stuff? Is there still gonna be a need for human-based decisions? Is it going to be hand in hand, AI and humans working together to automate business processes? Where do you see that going?
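The threshold rule Ben describes is simple enough to sketch in a few lines. This is a hypothetical illustration of his point, not any vendor's workflow API; the queue names and dollar thresholds are invented:

```python
# Hypothetical sketch of the threshold rule described above.
# Queue names and dollar thresholds are invented for illustration;
# this is not any vendor's actual workflow API.

def route_invoice(amount: float) -> str:
    """Route an invoice to an approval queue based on its total."""
    if amount > 10_000:
        return "executive-approval"   # highest-value invoices
    if amount > 1_000:
        return "manager-approval"     # mid-range invoices
    return "auto-approve"             # small invoices skip review
```

The point is Ben's: a plain conditional like this encodes a decision a human used to make, which is already a primitive form of automated judgment.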

The effects of AI on ECM

Scott Craig (19m 33s): Yeah. It’s a fantastic question. And quite frankly, it starts to get into an almost ethical discussion. Right? What happens in an economy where robots can do a majority, or a great portion, of what we do manually today? Bill Gates had an interesting proposal where you tax robots to provide living wages to the people whose jobs the robots are taking away. I don’t know what the right answer is, but I will say that where we are on the AI spectrum, we’re just now starting to accept things like self-driving cars, but we’re not really ready to accept, as an example, allowing a robot to make a clinical diagnosis for a medical condition. Right? We still want a doctor to look at that and make a decision. And I have a feeling that for most organizations, at least for the next 5-10 years, they’re gonna want a human in the loop to be that last step of validation. So it is possible that I could do a complete mortgage application in which a human doesn’t really touch it until maybe that final approval step. But where we are headed, I mean, the whole concept of Hyperautomation is the idea of fully automating processes that require no human interaction. And there are examples of that which I do think are scary to some degree, and then there are others that make a lot of sense.

I talk sometimes about how grocery stores are investing a lot of time in figuring out how to predict and forecast their stock and automate restocking. They’re still gonna need people to stock their shelves, and they’re still gonna need people to, well, you go to the grocery today and half of it is self-checkout. So I do think some of these things are inevitably gonna move in that direction. Deloitte did a survey just recently about AI trends, and 94% of business leaders said that AI is critical to their success in the next five years. So as you said, some of these seeds have already been planted. We’re already doing OCR. We’re already doing AI in a lot of areas. If you’re using Outlook today, you’re probably taking advantage of some AI and some automation that’s happening there. But as business leaders, we’re now having to look at what the next step is. I don’t think anyone’s gonna go from no automation to Hyperautomation. It’s all gonna be part of a transformation, part of that digital transformation. The first thing we gotta do is get our paper into a digital format, and then we’ve gotta extract information off of it. Once we’ve done that, then we can start to apply proactive, predictive AI to say, based on what we know and our learning engines, we should approve these things, or we should review these things, or whatever that is. Part of the process of making it comfortable for the rest of our team members and organizations is just explaining that it’s gonna be gradual, and we need people along the way to help inform and guide it. But I do believe that over the next 5-10 years, the skill sets that knowledge workers are gonna need are gonna be different than they are today, and probably the sooner we accept that, the better.

John Harrison (23m 32s): It’s interesting, Scott. You mentioned mobile capability a few minutes ago; it was talked about as a big deal maybe five to seven years ago, a differentiator among providers and technology, but it’s become a must just to be invited to have a seat at the table. One of the things Ben and I were talking about offline, maybe a week back, is that a lot of contracts and key information are now being sent to a mobile device, whether it’s a phone or an iPad or something like that, and you’ve got executives being asked to put an electronic signature on a document via mobile. And while that is a great option to have, one of the things Ben and I were talking about is that sometimes the important stuff gets left out. I was thinking about that as you were talking through your scenario, where a document maybe doesn’t present well on a mobile device and you need to be able to address that. We’ve seen this happen ourselves: you have a discussion down the road with that executive, and they’re wondering why something isn’t included, or is included, or is different from their expectations, and it was because they weren’t really able to read that detail when the document came through needing their signature or approval, and so it got missed. It’s all fine print on a mobile phone. How do you see AI helping address that?

I mean, I think I heard that maybe it’s pulling out the important items and presenting those in a different way for the user. Is that something you guys are working on addressing, or is that where the low-code capability you were talking about a few minutes ago might fit in, to help continue to facilitate being able to do things on a mobile device, which obviously improves efficiency across organizations that use our technology?

User experience in ECM

Scott Craig (26m 7s): Yeah. I think contracts is actually a really good example. So first of all, I agree with you. I just got a new OS update on my phone, and it popped up the terms and agreement, and it requires you to scroll through the whole thing before you can approve it. But it’s a terrible experience. Besides the fact that it’s written in a way that makes it difficult to understand, it also is incredibly long and isn’t easy to read on a mobile device, but I just ended up clicking through it and approving everything. For an executive, or an approver of something like a contract, that could be really risky. So first, and we talked about this before, I don’t expect an architect to walk around with an iPhone at a construction site. Maybe they do, but I couldn’t do it and say, “Alright, here’s where the bathroom goes, here’s the plumbing, and we got all that figured out.” So choosing the right device for the right use case may make a difference. That being said, where I think AI can help in a contract use case is that I could highlight what has changed since last time and pull that out into a screen that shows only the differences. Say, “These are the contractual terms that changed from the last time you saw it to now.” I would probably feel comfortable doing that on an iPhone. Now, I wouldn’t want to read the whole contract, but if you tell me clause 3a, sub-bullet four changed, and here is the old language and here is the language now, I could say, “Yeah, I’m good with that and here’s the bottom line,” or whatever that is.
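The "show only what changed" idea Scott describes can be approximated with a plain text diff before any AI is involved. A minimal sketch using Python's standard difflib, assuming each clause is one line of text:

```python
import difflib

def changed_clauses(old: list[str], new: list[str]) -> list[str]:
    """Return only the added/removed lines between two contract versions,
    suitable for a compact 'what changed' view on a small screen."""
    diff = difflib.unified_diff(old, new, lineterm="")
    return [
        line for line in diff
        if line.startswith(("+", "-"))
        and not line.startswith(("+++", "---"))  # skip diff file headers
    ]
```

A real contract-comparison feature would work on parsed clauses rather than raw lines, but the shape of the output, old language next to new language, is the same summary Scott says he would be comfortable approving on a phone.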

And we do that with invoices today. If you look at some of the invoice automation solutions from Hyland, the invoice is available in our mobile app. You can go read the invoice. For an organization like yours or mine, I bet our invoice from Dell every month has thousands of line items; we probably buy 400 laptops every month. So you just think about how complicated that invoice is. Well, when you go into our invoice approver app, designed for executives and for people who approve invoices on the run, it just shows you what the PO amount is and what the invoice amount is. Because chances are, if those things match up, you’re good to approve it, and in many cases they just automatically approve it if a PO matches. But maybe at a certain dollar amount, you have to look it over. You can go look at that, but separating out some of the line items and surfacing the bottom line of that invoice are all things that make it possible to do that on a mobile device.
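The PO-matching logic Scott outlines, auto-approve on a match but escalate above a certain dollar amount, might look like the following. The mismatch tolerance, the threshold, and the queue names are illustrative assumptions, not Hyland's actual rules:

```python
# Illustrative sketch of two-way PO matching; the mismatch tolerance
# and the review threshold are assumptions, not Hyland's actual rules.

def review_invoice(po_amount: float, invoice_amount: float,
                   review_threshold: float = 50_000) -> str:
    """Decide whether an invoice can be approved without a human."""
    if abs(po_amount - invoice_amount) > 0.01:
        return "route-to-reviewer"    # amounts disagree: a person resolves it
    if invoice_amount >= review_threshold:
        return "route-to-approver"    # amounts match, but value is high
    return "auto-approve"             # clean match below the threshold
```

The mobile approver screen then only has to present the two numbers and the outcome, which is exactly why it stays readable on a phone.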

So I think this is where the user experience side is so exciting and fun to me: thinking about how we have done these things in the past and how we could solve them for the future. It’s, quite frankly, finding ways to be way more effective and efficient than we probably have ever been. It’s using the technology, the devices, and AI to help us make decisions more effectively and more efficiently, and to do our jobs (I’m reusing the words effective and efficient) in the most secure way possible. So, yeah, I definitely think there are ways AI is gonna fit into the intersection with mobile to make those experiences possible and work really efficiently and effectively.

John Harrison (30m 5s): It’s interesting; of the folks I’ve talked to about the discussion Ben and I had, everyone has had that experience, either personally or from a business perspective, with a customer or someone they work with who has run into that. And to kind of change gears, I was laughing when you brought up the Blackberry. I was one of the last holdout big fans of the Blackberry because of the typing capability. I didn’t need pictures or internet access as long as I could text easily, but I finally had to give it up and make the move to an iPhone.

I want to switch gears just a little bit here. One of the things that’s fascinating to me, and I think there are gonna be a lot of studies done around this, is the impact on where we’re going and how software is embraced, utilized, and even implemented in the future. And I think a big part of it is the speed of change in technology. Ben was talking about you guys being in this industry for 20 years; I’ve been in the enterprise software industry since ’95, so almost 28 full years now. And I remember in the late nineties, when one organization would come out with a new feature, it would take their competitors maybe a year to a year and a half to answer it. Now, if you come out with something and tout it as a great capability you’re bringing to the market, everyone else has it within a couple of months. Or they already have a low-code development capability that they can fire up to address it.

My question is a little bit different from that perspective: how do you see the younger generations and their embrace of technology? We’ve now got generations entering the workforce that have grown up with a touchscreen, or with some type of device in their hands, and it’s just in their nature to understand technology, how it works, and how it fits together. How do you see that impacting where we’re going from a technology perspective, and whether it’s going to get easier to implement some of these things? I think they will embrace AI and utilize it in great ways, but I’m interested in your thoughts on that, and maybe some of the discussions or data you’ve looked at around the younger generations, the way they embrace technology, and how that might impact the roadmap at Hyland.

New generations and new technology in content services

Scott Craig (33m 14s): Yeah. You know, it’s funny. I have a daughter who’s 23 and just graduated college. She moved to Austin, and she works at Dell. And we were talking about what things her kids will do that puzzle us, because we were talking about my parents and some of their challenges in using mobile devices. I was speculating, and I don’t know if it’s gonna be true, but I think VR is something that would be very disorienting for me. Like, if my work environment was VR and I had some sort of headset on. You can kind of imagine the sci-fi stuff around it, but that is a possibility down the line. What I think you’re getting at is important, which is that each generation that comes into the workforce is bringing a higher level of acceptance of AI and virtualization. Whether that’s applying filters to pictures on their mobile devices, or in some cases allowing their phones to help automate their lives and make things easier. Then there’s the amount they share online: you know, my social media stuff is all very personal. I only share it with friends and family that I know. I don’t have open accounts. But my kids have a little different philosophy. What they’re comfortable sharing, having open to the world, and allowing AI to influence is probably gonna be different. I think they will be less intimidated by the idea of a self-driving car. I think they’ll be less intimidated by the idea of computers helping us make decisions. So that is gonna end up driving the types of solutions and products that we build and offer to the market. I think it will be gradual, but a really good example of something that has kind of subtly introduced itself is chatbots.
There are automated chatbots that we interact with all the time now, and in prep for this meeting, we had some funny examples of where our generation (I won’t give any ages, but let’s just say people who were working in the nineties) sometimes wants to talk to a real person. My kids, that’s the last thing they want to do. They would rather go through the automated system and not talk to a person. Because of that, things like these chatbots pop up where you’re not interacting with a person, but you’re asking some basic questions and it filters those. That, to me, is just one step of the process.

Imagine having those types of tools embedded in your enterprise applications. What if the interface isn’t a kind of Google search, but more of a chatbot? Say I’m dealing with a customer who wants to open a new account, and they don’t have this piece of information. What should I do? The chatbot tells me what to do next. It’s kind of a workflow process. Right? It’s exception handling. Well, I feel like that is where AI is taking us: it starts with informing decisions based on business rules that are understood, and as that becomes more comfortable and predictable, you start to ask, do I really even need a person to do this? Or could the chatbot do it without the customer service person, or whoever it is, in the middle? So I do think it’s all tied together, and it’s hard to predict exactly what the next big technology change is. You know, growing up in high school in the eighties, the idea of a mobile phone would have been exciting, and it would have made sense, just like in many ways VR is possible and makes sense. But it would have been hard for me to articulate that you’d have one device that could be your music, your entertainment, your business, your communication, all in one. It’s hard to figure out exactly where it goes next, but that’s where the technological underpinning of having everything driven off of APIs is important: it gives us the flexibility to adapt and adjust as new generations bring acceptance of new ways of working and doing things. And for me, as a technologist, I love it. I eat it up. But that being said, I still want to talk to a real person when I call United and my flight gets moved.
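The first step Scott describes, a chatbot informing decisions from understood business rules, could be sketched as a simple keyword-matching loop. The intents and canned answers here are invented for illustration; a production assistant would use intent classification rather than substring matching:

```python
# Minimal rule-based sketch of the exception-handling chatbot idea.
# The keywords and canned answers are invented for illustration.

RULES = {
    "missing id": "Ask the customer for a government-issued photo ID "
                  "and create a follow-up task.",
    "open account": "Collect name, address, and ID, then start the "
                    "new-account workflow.",
}

def next_step(question: str) -> str:
    """Match the agent's question against known business rules."""
    q = question.lower()
    for keyword, answer in RULES.items():
        if keyword in q:
            return answer
    return "Escalate to a human representative."
```

Note the fallback: anything the rules don't cover goes to a person, which is the human-in-the-loop posture Scott expects organizations to keep for the next several years.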

John Harrison (38m 14s): Yeah, it’s funny you bring that up. I’m the same way. I know, I think Ben and I talked a little bit about that last week as well while out of the country and discussing our travels, and it’s interesting — I want to embrace the machine that answers me on the other end of my call. I really want to, but I just can’t yet. And then there are other areas where I’m totally fine with that. And it’s typically in technology that I’m talking to people about why they should utilize, or that we’re selling and helping people implement within their business. Ben, I think you had a comment to share.

Ben Vaughan (39m 4s): Yeah. I mean, I’m kind of in a gray area there in the middle. I’m old enough that I didn’t grow up with a lot of technology, but I’ve used it, and I’m comfortable with both of those approaches — using automated methods in some places and manual methods in others. I want to talk to a real person. But I feel like we should’ve gotten somebody in their twenties on here to give us their perspective on what the future looks like.

But you’re kind of hitting the nail on the head there. I think about the emergence of things like the ChatGPT stuff that’s out there now. Right? I mean, people are really getting used to the idea of, I’m gonna go to one place and type in what I need, and this computer is gonna deliver it back to me. It’s kind of scary in some ways when you think about it. Think about it from the perspective of a customer service rep. We talk about calling in and talking to a human. So I called in and talked to a human, and they put me on hold for seven minutes while they go search across their systems to find all my information, pulling up four different screens in other applications to get everything. Versus, if that customer service rep has a screen up that says enter customer ID or customer name or whatever, and they type it in, then behind the scenes an RPA bot goes out and hits all four of their systems, gathers all that information, and pops it up right on their screen. Instead of me sitting on hold for seven minutes, I’m now staying on with the customer service rep who, 30 seconds later, has all the information they need. So I think it’s areas like that where it’s really going to speed things forward and make it a much better experience on both ends. Right? I mean, it’s a better experience for the customer service rep, and it’s a better experience for me as a customer.
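The fan-out lookup Ben describes can be sketched as a small script: instead of a rep searching four systems by hand, a bot queries each one in parallel and merges the results onto one screen. This is a hypothetical sketch; the system names and lookup functions are invented stand-ins, and a real RPA bot would drive each system’s UI or API rather than return canned dictionaries.

```python
# Sketch of an RPA-style fan-out: query four back-end systems in parallel
# and merge the results into one view for the customer service rep.
# All system names and returned fields below are invented placeholders.
from concurrent.futures import ThreadPoolExecutor

def lookup_crm(customer_id):     return {"name": "Jane Doe"}
def lookup_billing(customer_id): return {"balance": 120.50}
def lookup_support(customer_id): return {"open_tickets": 1}
def lookup_orders(customer_id):  return {"last_order": "2023-03-01"}

SYSTEMS = [lookup_crm, lookup_billing, lookup_support, lookup_orders]

def gather_customer_view(customer_id: str) -> dict:
    """Hit every system concurrently and merge the answers into one record."""
    merged = {"customer_id": customer_id}
    with ThreadPoolExecutor(max_workers=len(SYSTEMS)) as pool:
        for result in pool.map(lambda fn: fn(customer_id), SYSTEMS):
            merged.update(result)
    return merged

print(gather_customer_view("C-1001"))
```

Because the four lookups run concurrently, the rep waits roughly as long as the slowest single system instead of the sum of all four — which is the seven-minutes-to-30-seconds difference Ben is pointing at.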

I was just curious — I feel like there are probably folks listening to this from all ends of the spectrum, right? I mean, there may be some people who have really embraced it and are using it in multiple different areas of the organization, and there are others who are just kind of dipping their toe in the water, trying to understand what it means. Are there any resources out there, any areas that you would recommend, to say, “Hey, go read this magazine, or read a certain blog, or follow a podcast or Twitter account,” or whatever, that could help people continue to learn about this and understand it a little bit more?

Scott Craig (41m 29s): Yeah. That’s a great question. I’d have to think a little bit about specific sources, and I’d have to get the right names of people, but I will tell you that even just at Hyland, we’ve got a couple of people who are posting on LinkedIn regularly about trends in AI, specifically around content services, and I think that could be a really valuable resource. So if you follow Hyland on LinkedIn, we post some really good content that I think is grounded in reality. And that’s not to say ChatGPT and other things are not reality — but what can I do today? And you gave a great example using RPA. I mean, that’s a product that we have in our hands right now. Any organization can utilize robotic process automation today. It doesn’t matter how old your systems are. It doesn’t matter what you’re using. There is a way to leverage it, and that starts to get you on the train toward Hyperautomation — which for some businesses may be ten-plus years away, and that’s fine. But we’ve gotta start taking the steps now, and I think RPA is a great example of where you can do that: it collects information that’s difficult to get out of another system and pulls it out right in the context of what your user needs to do.

Final thoughts

John Harrison (43m 2s): Scott, thank you. This is one of those conversations I think we could probably spend another four hours talking through all the different things we’re seeing and the exciting stuff that’s out there in the future for all of us to not only work with ourselves but also provide to our customers and help them continue to evolve and provide more value with the Hyland platform.

I want to share — I know we’re coming up on wrapping up, and I think we’ve hit a lot of different points, but — just from my perspective, listening to what we’ve talked about, I think there are a couple of themes that come out of the discussion with you, Scott, and some of the points that Ben was making as well. One is digital transformation. Everyone’s talked about that, and it’s a broad term that I think is a little overused in many cases, because it can mean different things to everyone you talk to; there’s not one definition of what it means to an organization. But where we’re going with the platform, and where we’re seeing other big technology and strategic partners in the marketplace go, is toward AI. That’s at the top of everyone’s list. One of the things I heard today, though, is that things are changing so fast that one of the capabilities organizations need to continue to be successful is simply agility — a willingness to learn constantly and take in what’s going on. Because the pace of change in the business world is so fast, and the environment can turn on a dime with a new disruptive technology, our ability to make quick decisions about where we go, how we work with other organizations, and how we fit into a certain technology stack with a customer is extremely important. So a lot of it has to do with management, with decision making, and with the overall agility of an organization. That became a theme in listening to your answers, and I really do believe it’s going to be a key to success moving forward for any technology organization, including ours, and for how we work together to service our customers.

But I’ll turn it over to you guys. Scott, we’ll start with you for your final thoughts, and I can’t thank you enough again for joining us on the Orange Chair podcast.

Scott Craig (45m 57s): Thank you so much for having me. This has been really fun, and I love talking about these subjects. And your last point is so important — it’s why low code gets tied into discussions around AI: that ability to be adaptive, flexible, and agile. Because you’re absolutely right that it is changing all the time, and low-code tools give us the ability to do some of that. So I think they really do go hand in hand. But thanks again for the opportunity. It was fantastic to get a chance to talk with you guys, and I hope we can do it again sometime.

John Harrison (46m 46s): Absolutely. Thank you, Scott. And Ben, any final thoughts from you?

Ben Vaughan (46m 49s): Yeah. Just wanted to again thank Scott for joining us today. I’m lucky enough that I think this is the second time now that I’ve moderated something with Scott, and it’s always a pleasure to have him on and share his insight. He sees a lot of stuff out there in the industry, so I’m always excited to hear what he has to say about it. So, thanks again Scott for joining us.

Scott Craig (47m 7s): My pleasure.

John Harrison (47m 8s): Awesome. Well, thank you guys. It’s a blessing for me just to be on the call with you two smart guys, and it’s really enjoyable listening to you share your thoughts on where we’re going and on a lot of exciting stuff for us in the future. Thank you guys for joining today — really appreciate it, and I look forward to doing this again real soon.

Caroline Ramackers (47m 30s): Thank you for joining us for this episode of the Orange Chair podcast. This podcast is brought to you by KeyMark Inc., experts in automation software for over a quarter of a century. Never miss an episode by subscribing to our channel, however you listen to podcasts.

You can also find us on Instagram at The Orange Chair Podcast. For more information, please visit our website at www.KeyMarkinc.com. Until next time.
