An immersive conversation on AI’s operational efficiency in today’s legal industry – find out more with Helm360

Welcome one and all to The Legal Helm podcast for August. In this episode, Bim speaks with Steven Choi, the co-founder and CEO of Traact, a cloud-based platform designed to help automate those repetitive (but crucial!) administrative tasks in legal and financial spaces. Bim and Steven talk about the three stages of “the hype cycle,” where AI works best, and how legal tech companies can avoid ending up like Amazon Go. Be sure to listen to the end of this illuminating conversation!

Your host

Bim Dave is Helm360’s Executive Vice President. With 15+ years in the legal industry, his keen understanding of how law firms and lawyers use technology has propelled Helm360 to the industry’s forefront. A technical expert with a penchant for developing solutions that improve business systems and user experience, Bim has a knack for bringing high-quality IT architects and developers together to create innovative, usable solutions for the legal arena.

Our guest

Steven Choi is the co-founder of Traact, a platform that helps legal and finance teams automate repetitive administrative tasks and improve the efficiency of their back office. Steven has held various roles at Google, Uber, and Olivia AI, and brings a wealth of knowledge around AI along with broad technology experience in the legal and finance industries.

Transcript

Bim: Hello Legal Helm listeners. Today I’m delighted to be speaking with Steven Choi, co-founder of Traact, a platform that helps legal and finance teams automate repetitive administrative tasks and improve the efficiency of their back office. Steven has held various roles at Google, Uber, and Olivia AI, and brings a wealth of knowledge around AI and general technology experience in the legal and finance industries. I’m really excited to be talking to him today about his work at Google, his insights into artificial intelligence and its application in the legal industry, as well as his vision for the future of Traact. Steven, hello and welcome to the show.

Steven: Hi Bim. Thanks for having me today.

Bim: It’s great to have you here. So I thought it would be great for our audience to first learn a little bit about yourself, Steven. So maybe you could just kind of give us a bit of a run through of your journey. You’ve worked at some of the big names, Google and Uber. I’d be really interested to kind of just talk to our audience a little bit about what you were up to in those roles.

Steven: Yeah. So I’m actually an engineer. Nothing related to legal—I had very little knowledge of legal, actually. But there was this huge issue we ran into when we were trying to close out our last organization in an M&A transaction, and that got me into the legal space.

But traditionally, I’ve been an engineer, basically building autonomous products—in the self-driving car industry, or drone or satellite systems—that require some intense, high-tech work around detection, classification, and prediction modeling systems.

So that’s been my background, and generally my specialty in the organizations you mentioned—Northrop to Google to Uber. Building out AI solutions for FinTech organizations as an engineer is also part of my background.

Bim: And so that transition—from, kind of, a great history in autonomous vehicles and all the other pieces of the puzzle you were involved in—led to Traact. It would be really great to understand a little bit about what Traact is as a platform and what it delivers, particularly to the legal industry and legal departments, that’s useful for them.

Steven: Yeah. So if you go ask any engineer that’s in the FAANG group, or thinking about starting an organization, or any engineer that might have an interest in going into tech, and you tell them, “Hey, stack rank all the companies you want to work for,” the top of the pile would usually be anything related to deep tech. So, like, AI solutions, robotics, autonomy. And then it would go into anything that’s product or engineering tech, like DevOps tools or product-related things like an Asana or Jira type of toolkit, or infrastructure players like AWS, GCP, Azure types of organizations.

And then you go way, way, way, way down the totem pole, and legal tech would actually exist there. So there’s an abundance of tech being developed without the strong fundamentals you would expect in modern-day technology. And as a result, the actual amount of tech that gets introduced into legal tech has been fairly shallow.

We believe this has historical parallels in tech. A lot of the time there’s a cycle between what would be considered point-solution software and, like, holistic bundling. If you look at how Microsoft existed in the late 90s to early 2000s, there was a bunch of bundling going on. And now we’re in the era of unbundling.

So there’s a bunch of these point-software solutions out there, and our thesis is that this bespoke software will go into a bundling phase again—the cycle kind of wraps around. So what Traact does is this: if you think about all the work in-house teams or law firms do for corporate clients—whether it’s entity management, licenses and permits, regulatory boards, disputes and litigation, contracts, or general governance—each of those is a different point-solution software today, and lawyers just don’t want to jump across six or seven different platforms to actually get their work done. So we bundle it all together and provide it as a single tech solution for outside law firms or an in-house legal team.

Bim: And you’re right. I think a single user experience definitely makes a big difference when it comes to user adoption of these kinds of products, and it makes for a more seamless experience across the different elements of a lawyer’s workday. So that’s really, really good to hear. In terms of a couple of other pieces of information that might be useful for our audience—I presume this is a cloud-based solution that’s delivered using a SaaS model?

Steven: Yes, correct. It’s a cloud-based solution, and you can pretty much work in any jurisdiction you actually operate in. And the data is compliant with GDPR and any local hosting requirements from a security perspective as well.

Bim: Fantastic! Okay, good, good, good. So just moving on a little bit, I do want to dig a little deeper into the world of AI, as it’s such a buzz topic at the moment and very relevant in the legal industry in particular, given the plethora of products that have been launched or extended to have AI capability. And for a lot of the customers we talk to, there’s a lot of confusion and general misconception about what AI can actually bring to the table.

And it would be really interesting to get a little bit of a review from you on this. I took a look at your AI guidebook, which, by the way, is a really interesting read. So, for anyone listening, we’ll include a link to it so you can take a look. You talk about some of the applications of AI historically—some that were really successful and some of the big failures as well. I’d love for you to walk us through some of those use cases because they make for compelling listening, I believe.

Steven: Yeah. So, in terms of how the hype cycle goes—this is the third wave of the AI hype cycle, by my count, and I’ve lived through two of those already. The first one was the machine learning hype, right before the 2014–2015 era, when machine learning was starting to come in.

And the history behind that, really simplified, is that algorithms were good at providing a particular output for a particular problem, which is very unidirectional. Machine learning was trying to prove that at scale. So it’s, like, “Okay, if I can do one thing with this, why can’t I adapt the same algorithm to other solution sets?” I’m trying to be not too technical here so it’s easy for the audience to absorb the information. That, essentially, is machine learning. A lot of money went into it, and a lot of the applications didn’t really play out in the end.

And deep learning was the second phase, which is, like, “Okay, now we can create multiple algorithm sets. Can we predict what the next stage is going to be?”

And now we’re living in this third phase, which is generative AI. So it’s, like, “Okay, now we can predict. Can we predict accurately enough that it actually surfaces information that’s relevant to the actual end user?”

And traditionally, in all those phases, billions of dollars have gone in. The winners in most of those markets have been the infrastructure players, not the AI application companies. So if you think about how many AI companies have become unicorns, and how many of those are application layers, you’ll find there are none. And if any, their revenue stream is very shallow, because for every dollar you put into the AI space, 80% of that money goes to the infrastructure players—AWS, GCP, labeling companies like Scale AI, and so on—and the other players that funnel in and help you build AI tools.

And the key reason for that is, if you look at the application layer, there are things AI can do really well and things AI can’t do so well. What AI does well is anything with repetitiveness in the solution set—and the environment has to be controlled as well. It’s really good when you apply AI solutions to those kinds of applications.

A good example of that is mapping algorithms, or warehouse robots, right? Those tend to be fairly good: repetitive, controlled-environment types of solutions. But for any application with very high uncertainty that requires creativity—and this is why I’m fairly bearish about the generative AI wave—it hasn’t really delivered on expectations.

Some good examples: in 2017, 2018, everybody was talking about self-driving cars and how they were going to revolutionize the way we drive. Well, it didn’t really play out—but that same technology worked out really well in the warehouse robot space. Because with self-driving cars, in the real world, the rules say you can’t have jaywalkers, but that happens all the time, whether it’s in London or New York or whatever city you’re in. You can’t predict the behavior of the driver next to you. So if you consider all that, it requires creativity, and there’s inherently high uncertainty. And there are so many edge cases that it just doesn’t perform better than a human’s response time.

So that’s kind of the gauge. If you want to see AI succeed, there are just two simple questions you have to ask: Is it repetitive—meaning a lot of simple administrative tasks going back and forth? And is the environment controlled? If not, then most likely it’s going to fail.

And that’s why I don’t think lawyers have, you know, that many use cases where AI could actually supplement or replace their work. A lot of the work legal organizations do is very bespoke and customized to a specific use case.

Bim: Yeah. Yeah. Absolutely. Those are great examples, by the way, and really, really helpful for understanding. So in terms of where we are on the AI journey at the moment and where you see it going, how do you think this evolves, particularly for the legal world? Do you see some applications for law firms that will make sense in the future?

There’s a bunch of stuff happening at the moment around more standard contract creation and that kind of thing, right? But like you say, it’s not really going to replace the bespoke elements of the work. For some of the lower-level, more standardized elements, though, we can see some use cases happening. But where do you see it going in the future? That’s what I’m really interested in learning.

Steven: Yeah. So I think it really depends. It depends on how much money the law firm has to actually adopt an AI solution, what the ROI is, and what objective you’re trying to achieve with the AI solution. I think 90% of the use cases people talk about today for AI in legal are probably not going to pan out so well. They’re going to implement, and they’re going to realize it didn’t come to fruition the way they expected, because there’s a lot of optimization involved and there’s a cost to each of those optimizations.

So, for example, if you’re trying to build an on-prem solution with an LLM system that’s tailored to, say, review and the specifics of a particular domain, you’re probably going to spend close to a quarter million dollars to get it to perfection. And it’s going to take the same amount of energy to maintain it. So how many law firms in the world actually have that kind of budget to maintain a system and get the output they’re after? And there’s a very simple question here that I think a lot of legal folks don’t ask. If you go to any salesperson, they’re going to say, “Hey, we have AI. We have the solution. This is great!”

But the constructive question you need to ask is, “Okay, what’s the accuracy of this AI solution?” Which, in engineering terms, we call precision and recall. The simplistic way to explain precision is, like, if I throw a dart at a target, how close it lands to the bullseye is the precision. So if I hit a 10, that’s pretty precise. If I hit a seven, not so precise, right? If the dart goes out of the room, that’s really bad!

And recall means how many times you can repeat that motion over and over. So if I’m throwing a dart 10 times, am I hitting the bullseye each time? That’s a very high accuracy and a very high recall score, and together that makes a pretty good AI model.
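(For listeners who want the textbook version of Steven’s dart analogy: in machine learning, precision asks, “Of everything the model flagged, how much was right?” and recall asks, “Of everything it should have flagged, how much did it find?” Here’s a minimal sketch in Python—the clause-review framing and the data are hypothetical, purely for illustration.)

```python
# Textbook precision/recall on binary labels (1 = positive).
# Hypothetical scenario: a model flags contract clauses that need lawyer review.

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # correctly flagged
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # flagged by mistake
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # missed entirely
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of what we flagged, how much was right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of what was there, how much we found
    return precision, recall

y_true = [1, 1, 1, 0, 0, 1, 0, 1]  # ground truth (made-up labels)
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]  # model output (made up as well)
p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.80 recall=0.80
```

A model can score high on one and low on the other—flag everything and recall looks perfect while precision collapses—which is why a single “accuracy” number from a vendor deserves the follow-up questions Steven raises next.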

The issue with most companies is how they measure precision and recall: because they can’t cover all the edge cases, they have a golden set that they test against. And those scores tend to always be higher, because you’re optimizing against this golden set rather than against real-world examples.

So whenever a company comes out and says, “Hey, we have, like, 90% accuracy, 95% accuracy,” that really means they tested against a golden set, not against real-world examples. Now, rhetorically, if you reverse that question—if you have 90% accuracy, that means 10% of the time this model is getting it wrong.
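(Steven’s golden-set point is easy to demonstrate with a toy Python illustration—the “model,” the data, and the numbers below are entirely made up. A model tuned against its own curated test set can post a near-perfect score there while doing far worse on messy real-world inputs.)

```python
# Toy demo: why golden-set scores flatter a model. All data is synthetic.
import random

random.seed(7)

# The golden set: curated examples the team tuned the model against.
golden_set = [(f"clause-{i}", i % 2) for i in range(100)]
# Real-world inputs: edge cases the golden set never covered.
real_world = [(f"clause-{i}", random.randint(0, 1)) for i in range(100, 200)]

def toy_model(doc_id):
    """Near-perfect on the patterns it was tuned on, guesswork elsewhere."""
    i = int(doc_id.split("-")[1])
    if i < 100:
        return i % 2               # memorized behavior on familiar inputs
    return random.randint(0, 1)    # coin flip on unfamiliar inputs

def accuracy(dataset):
    return sum(toy_model(d) == label for d, label in dataset) / len(dataset)

print(f"golden-set accuracy: {accuracy(golden_set):.0%}")  # 100% on the curated set
print(f"real-world accuracy: {accuracy(real_world):.0%}")  # roughly a coin flip
```

The gap between those two printed numbers is the gap Steven describes between a claimed 90% and a real-world 70%.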

Can you as a law firm bet on something that is 10% inaccurate? And what’s the risk of accepting that? That’s the first question I think every legal professional needs to think about.

Second—outside of the cost issue—if it’s 10% inaccurate, have you thought about attorney–client privilege and the privacy issues that go with it? Can you use other clients’ data to train your model? And wherever this information comes from—because the model had to be trained on a vast body of information through a foundation model—is it going to be widely acceptable that your clients’ information is being used to train another system?

Those are the two major blockers I see. One is the risk factor—the exposure that comes with not being so accurate, right? And I think most models are probably at a real-world accuracy level of around 70%, which is much lower than the 90% people claim. The second attribute is the privacy and regulatory concerns.

And this is why AI engineers have consistently been going to regulators and asking them to regulate us—because there is a huge concern around that piece.

Bim: Yes, indeed. I was just reading today about the EU AI Act, which I believe is going through the European Parliament at the moment. And, actually, one of the things I wanted to ask you about is that it’s obviously the start of some level of legislation—for a specific region, obviously—but I’m sure it will eventually be adopted elsewhere in some form.

What does that mean for the rollout of some of these AI solutions? Because, as with GDPR, right, certain things forced companies to operate differently. What impacts do you see from that perspective? And what should people be thinking about with regard to legislation that will most likely, you know, become law very soon?

Steven: Yeah.

So if you think about how GDPR came about, there were privacy concerns that had to jump through a lot of hoops. In the state of California, a similar privacy act was passed to protect consumers’ data. So, in the short run, regulation is probably going to hold things back in terms of technical implementation. But in the long run, I do believe the process, and what we actually get to consume, will be much better for it.

A good example of this: think about just five years ago. If you had a cell phone, there were, like, 40 different dongles you could use to charge it, from USB-A to a Lightning cable to various other plugs. And now everything is getting standardized to USB-C, because it’s just a lot more eco-friendly. It took us a while to get there, and a lot of manufacturers had to adapt, because the EU demanded that everything be standardized to USB-C.

But in terms of the gain, now you have one plug that works with every single solution set, right? It’s the same thing with AI right now. Everybody has various different ideas, various different concepts, various different application layers. Regulation will slow down adoption in the short run. But in the long run, I believe it’s a much healthier ecosystem to have some kind of regulation meeting those needs—especially around what kind of information the AI can actually produce.

Bim: Yeah. Yeah. Totally makes sense. And just to touch on this a little: a lot of what’s coming out of this new legislation is obviously there to add an element of control around what’s happening with AI applications, and some of that has come from fear, right?

Fear about what it does to jobs—job security in terms of replacing jobs—and particularly around the element of self-learning, right? The ability of things like AutoGPT to kind of self-learn and get better. Tell us a little bit about that. Can AI self-learn? Is it going to get to a point where it’s, you know, cleverer than a human being? What’s the reality there?

Steven: Yeah. So like I mentioned earlier, AI is really good at two things: repetitive tasks and controlled environments. For factual information and repetitive work, AI—computers in general—are going to be much better than humans, right? But is it going to replace a very sophisticated, bespoke job? Most likely not, right?

So, there are a lot of articles going out saying, like, “Oh, this is going to replace lawyers’ jobs, doctors’ jobs, statisticians’ jobs,” and so on and so forth. It’s going to help people get their work done much more quickly, but in the process, it’s going to create a lot more jobs in this field as well.

But is it going to completely eliminate someone’s job, in a black-and-white, zeros-and-ones way? Most likely not. And we’ve seen this in the computer wave as well—think of all the work that was done before computers, or smartphones, were introduced.

Yeah, obviously it eliminated a few repetitive jobs people were doing. Work that took three days got shortened to one, so you didn’t have to hire as many people. But in return, a lot of jobs were created in the computer and IT industry, with a multiplier effect.

If you think about what goes into creating AI... This is probably not the best analogy, but look at how manufacturing worked in the clothing and apparel industry—fast fashion. It was a bunch of people in Sri Lanka or India or Bangladesh creating clothing in, essentially, sweatshops. Well, did that industry go away? That’s repetitive. That’s controlled.

Unfortunately, it’s still there, right? But there’s also a modern-day sweatshop that makes AI work, and that’s the labelers. If you think about how you consume Google Search, Google Maps, or Yelp, or any other technology solutions—or the flagging of what people call fake news on Facebook—there are millions of people abroad getting paid $3, $4 an hour just meta-tagging every single thing for you.

That’s a modern-day sweatshop, but it’s kind of needed to build out a sophisticated AI solution system, right? So is it going to remove currently existing jobs that are inefficient? Probably. Are those people trainable to do new work in this new, smarter-working era? Absolutely! So it’s not going to be a situation where jobs are completely eliminated. In the long run, it’s probably going to funnel in additional, higher-quality jobs, I believe.

Bim: Yes, indeed. And just like when the typewriter was no more, the human race will evolve to embrace AI in a different way. So yeah, I totally agree. So, I’m interested to hear what you’re excited about on the AI side of things. What applications are you seeing and hearing of that are really getting you excited about this evolution?

Steven: Yeah. So the parts I’m more excited about are how information gets consumed by a lawyer in the legal tech space, right? And how it can get organized much quicker. That’s the repetitive work a legal administrative assistant usually used to do, or that lawyers just hate doing.

Let me give you specific examples. In the traditional legal world, a folder of client files gets produced and handed to the legal admin or a paralegal: “Hey, go organize this thing.” That’s really, really inefficient if you think about it. You’re paying somebody billable hours of upwards of, what, $100 to $200 just to organize file folder structures.

That doesn’t require as much human touch as people think it does. It could be completely automated—well, not completely, but 80% automated—with a human in the loop just to check through it. This is going to enhance how quickly legal services are delivered and raise their quality, because the time you’ve been spending on administrative tasks can now be transformed into real, more strategic work.

If you think about a lawyer’s day, 80% of it lives in emails and Word documents, reading documents. Imagine if the emails could be sorted through, turned into action items, with the key points highlighted: “Hey, this is the thing you need to put your attention on.” And the same with documents. That gives a huge boost to the legal team in deciding where to put the energy, effort, and research that’s more bespoke.

And then the quality of legal service provided to the end consumer is going to be much more satisfactory for the same billable hours, from an outside-firm perspective. From an in-house perspective, a lot of what people miss is in the space of plain document management.

If you go into any legal team’s document management solution, it’s completely bombarded with junk created, like, 10 years ago, or things that aren’t pertinent to a particular document. And people spend hours, and millions of dollars, trying to organize that. That could probably be automated in a lot of ways—that’s where AI could come into legal tech.

Now, the question is: AI is not perfect, so can it give the right confidence score to the end user, saying, “Hey, we did this work for you. I’m not perfect”? Just like a legal admin doing that work wouldn’t be perfect, and would ask the lawyer who wants the output to check it. Can it really tell the human, “These are the things we’ve done. This is the piece that needs your attention. Tell us how we can get better,” from a reinforcement learning perspective?
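(Here’s a minimal Python sketch of the confidence-score, human-in-the-loop pattern Steven is describing. The Suggestion type, the 0.85 threshold, and the file names are all hypothetical; a real system would tune the threshold against its own precision/recall needs.)

```python
# Minimal human-in-the-loop triage: auto-file what the model is confident
# about, queue the rest for a person. Names and threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Suggestion:
    document: str
    folder: str        # where the model proposes to file it
    confidence: float  # model's self-reported score, 0.0-1.0

REVIEW_THRESHOLD = 0.85  # tune against your own precision/recall targets

def triage(suggestions):
    auto_filed, needs_review = [], []
    for s in suggestions:
        (auto_filed if s.confidence >= REVIEW_THRESHOLD else needs_review).append(s)
    return auto_filed, needs_review

batch = [
    Suggestion("engagement_letter.pdf", "Client Intake", 0.97),
    Suggestion("msa_redline_v3.docx", "Contracts/Acme", 0.91),
    Suggestion("scanned_fax_002.pdf", "Unknown", 0.42),
]
auto, review = triage(batch)
print(f"{len(auto)} auto-filed, {len(review)} queued for a human")
```

The corrections a reviewer makes to that low-confidence queue are the feedback signal: logged and fed back into training, they form the reinforcement learning loop Steven mentions.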

That’s the part I’m actually fairly bullish about in terms of how AI could be adopted by firms or in-house teams as a solution set. Everything else—“Hey, we’re going to generate a contract for you,” “We’re going to write letters for you”—we’re still very far away from that.

Bim: Good to hear your thoughts on that. So if I put myself in the shoes of a law firm or legal department today that’s been tasked with finding products that can bring these efficiencies by leveraging AI in some shape or form—you mentioned understanding the cost and return on investment of a product that has some element of AI involved. Are there any other tips you could share with the audience on how to truly evaluate whether it’s going to be an effective solution? And then, how do you keep an eye on it to make sure it’s still delivering value?

Steven: Yeah, I would say one thing: Don’t be the first market mover. You’ll end up spending all the money for the benefit of the fast follower.

Let me give you a very specific example that I think will resonate with a lot of people. Amazon launched a product four years ago called Amazon Go, which was supposed to replace every cashier out there. You just walk into the store, grab a thing, and leave. The Amazon app shows, “Hey, you picked up this item,” and then it sends the actual receipt, like, 30 seconds after you walk out of the store.

So now let’s talk about what the purpose of this was. From a US perspective, you’re trying to replace somebody who makes $20 an hour in labor—by European standards, maybe a little less. That was the objective: replace cashiers. The issue, like I mentioned with AI, is that it’s very costly to maintain. What Amazon learned through implementing this was the real cost barrier: the objective was to replace $20-per-hour labor, but it ended up costing $50 an hour just to maintain the system.

So from a cost-modeling perspective, it failed. And you know what? The alternative to that bet came from organizations like Walmart and Costco and all the other major retail stores. You’ve probably been to a bunch of these stores and seen the self-checkout.

It achieves the same thing. It still needs the customer to do the work, but instead of one cashier dealing with customers one at a time, you can now have multiple customers doing their own self-checkout. And you still remove that same $20-per-hour labor with, you know, a simple station.

Bim: Yeah.

Steven: That’s a good definition of an AI-based solution versus a heuristic, algorithm-based solution. Fast forward four years, and Amazon Go is now shutting down its operations one by one, because it didn’t meet the business objective. And the self-checkout achieved the same thing—it actually got people out of the store much quicker.

So, one was with AI, one was without AI. I think it’s too early for most legal tech companies, law firms, and legal organizations to really understand where AI is actually going to have the right impact. And people blindly jumping into a solution set based on what salespeople tell them is probably going to backfire, just like the Amazon Go situation.

So if you do decide to adopt it, make sure you follow through on these steps. One, make sure your data storage—your document management—is very clean, because with AI solutions, garbage in means garbage out. Second, have some form of work automation and analytics in place so you can actually see how much impact the AI has delivered to your organization.

Without these two foundations, it doesn’t matter what AI solution you adopt—you’re most likely going to fail. At the earliest, I’d presume it’s going to take four or five years before you see any kind of fruition from AI adoption in the legal tech space. So if you have that same budget, my recommendation is to start working on cleaning up your data storage, because that’s going to be the foundation of how well your AI performance metrics actually perform over time.

Bim: Yeah, that’s a very good point. And the Amazon Go story is really, really pertinent. Are there further challenges around the pace of change in this area as well? Because, you know, pretty much every day I read about a new large language model being released, right? Does that also play into this? Does it add an element of risk—if I go out and buy a solution today that has, you know, some dependency on OpenAI’s language model, and then another player comes out with a better version, am I then out of date? What are your thoughts on that?

Steven: Yeah. So whenever an AI company releases a general announcement, it’s good publicity to help sell their product. What they don’t tell you in that outbound PR marketing is what it doesn’t do, right? Because nobody wants to show their vulnerability. So I think it’s important for people to ask smart questions of these folks: “Hey, you achieved this with your LLM. But how does your language model compare when it’s applied to this particular solution set? Can we test this? Can we go through a process to look at it?”

These are hard questions that sophisticated engineers can ask, but they’re really hard for any lawyer to ask unless they’ve done that kind of background research or actually implemented solutions before.

So the question is: how many law firms and in-house legal teams have actually built real AI solutions into their current workflows, and how many have done it correctly? I would assume close to zero. And that’s the issue—it’s not about being outdated because somebody else came up with a new technology; a new technology, a new solution comes out every day. It’s about asking, “Is this solution the right application for our specific use case or not?” I think that’s much more important to consider.

Bim: Thank you very much for that. So it’s been really insightful talking on the subject of AI. I do want to switch gears a little bit and kind of take us to the last segment of the show and talk a little bit more about yourself. So I have a couple of wrap-up questions that I want to ask and just dig into with you.

So my first is: if you could borrow Doctor Who’s time machine and go back to Steven at 18 years old, what advice would you give him?

Steven: One thing I’ve realized over my career is that deep tech is really, really difficult. Every deep tech solution I’ve touched either goes over budget or sees really slow adoption. So, if you want to make change in any kind of process, it’s always better to make small, incremental changes on a daily basis than to attempt one vast, completely paradigm-shifting change.

That doesn’t really happen that often. If I could go back in time, knowing what I know today, I probably wouldn’t go into deep tech. I’d go into a much shallower, quicker operation—product-led solutions are what I’d focus on.

Bim: Okay, great, great answer. My next question is around driverless vehicles. So the big question is, do you have a car that has autopilot and do you use it?

Steven: Um, I don’t. And if there were a fund that bet against self-driving cars over the foreseeable 5 to 10 years, I would put all my wealth into it. Knowing what goes into that system, it’s just not theoretically possible unless regulatory change happens—like in China, where you can have a whole city dedicated just to self-driving cars. Then it just might work. But not the way it’s being done now. Self-driving trucks, potentially, because that’s a much easier use case to build for. But no, I don’t trust autopilot systems where they are, and I would not put my family in one.

Bim: Good. Good to know. So I don’t know if you’re a social media guy on Twitter or any of the other platforms, but Meta’s recently announced that they’re launching Threads, a Twitter competitor. I’m wondering if you will be signing up to Threads.

Steven: So I’m not a huge social media person. I don’t use LinkedIn that often. I used to be a Facebook user way back in school, but since then I haven’t really used it actively. I don’t have a Twitter account. I try to minimize my digital footprint as much as possible, so probably not. But it seems like a fairly interesting concept. I just wonder how it’s going to play out in today’s social media landscape.

Bim: You’re probably freeing up a lot of time to be more productive by not doing, you know, the endless scrolling.

Steven: Yes, yes, yes. That’s, that’s for sure.

Bim: Yes, indeed, indeed. Very good. Any closing thoughts or advice that would benefit the legal professionals in our audience?

Steven: Yeah. One thing I’d share is that for legal tech companies to really make a difference, there are two things to understand—I’ve learned this being in legal tech for a bit. One is the barrier to entry for getting into the market, and the other is what kind of services you can actually provide as a legal tech organization to the end customer and the end client.

So if you look at why the engineering ecosystem for legal tech is so shallow compared to some of the other fields, it’s the behavior of how legal tends to run. Go back 10 years, when Google and Facebook and, you know, the FAANG companies were really hot, and ask the Goldman Sachs CEO or Jamie Dimon of JPMorgan Chase, “Who’s your biggest competitor for talent right now?” You’d expect them to say, “Oh, it’s Citi, or Wells Fargo, or Barclays, or some of the other banks.” But what they said was, “Our biggest competitor is Facebook and Google.” Because the people who used to want to work for the big banks were going to tech companies instead—they were competing for the same talent.

Legal tech—the legal industry—was never that, you know, hot, sexy type of field for tech providers, because of how legal behaviorally adopts solutions: slowly. It’s very relationship driven, and it’s also a very wine-and-dine type of sale, right?

I think what people need to realize is that unless you start opening up this ecosystem to be much more adoption friendly, you’re never going to see good enough talent coming into legal tech to bring it up to the level it needs to reach. I don’t know how many people I’ve spoken with who say, “Oh, legal tech really sucks. There’s no solution that fits me.” Well, yes—because you guys built a wall, and it blocks the people trying to enter this market.

And it comes down to this: as a legal tech person, there are two things you need to look at. One is how standardized the process is, and the other is how big the market is. Legal is a fairly big market, but it’s very bespoke and very closed, so it doesn’t give tech companies an entry point or the breathing room to come in and, you know, revolutionize the way you work and build the solutions you actually need. And I think the quicker you make this adoption change, the more good companies you’re going to see pop up in the legal tech space.

But right now, there’s definitely some resistance across the board. And as a result, there aren’t that many companies that you actually enjoy, and vice versa. So both parties are kind of complaining, right? One party is saying, “I don’t see a good solution,” and the other party is saying, “Well, they’re not being very fluid about adopting.”

So hopefully over time this changes, and then you’ll see the same kind of shift that happened in the HR industry, right? And in the FinTech industry—they were also very closed, but now there’s a lot of good talent going into that space to help those operators with tech solutions.

Bim: That’s a really good point. And yeah, it would be very interesting to see if that dynamic changes over the years ahead because that would be a transformation, really, for the legal industry. So, yeah. Good thoughts on that.

Steven, it has been fascinating talking to you today. I really appreciate you taking time out to talk to us on the show. If people do want to get in touch, what’s the best way to contact you?

Steven: Yeah. So my email is probably one of the quickest ways to get ahold of me: steven@traact.com. Happy to talk, whether it’s about tech or AI solutions you’re trying to adopt, or just to network. Since I’m not a social media person, you probably won’t find me there. But yeah, feel free to reach out, and I’m happy to discuss things with you.

Bim: Fantastic! Thank you again, Steven.

Steven: Yep. Thanks Bim for having me here today. It was a pleasure.

Helm360 is a full-service legal tech provider specializing in BI, chatbots, application managed services, and cloud hosting solutions.