Navigating legal tech implementation challenges with Helm360

Join our host, Bim Dave, as he shares a treasure trove of proven strategies for navigating the top three pain points that often derail even the best-of-breed solutions: data conversion, testing, and user adoption. In today’s fast-paced legal landscape, staying ahead means mastering the art of tech adoption. Bim has spent years dissecting the complexities of legal tech, and he’s here to distill that wisdom just for you. As we explore these critical aspects of implementation, you’ll gain invaluable insights to steer your projects toward success.

Your host

Bim Dave is Helm360’s Executive Vice President. With 15+ years in the legal industry, his keen understanding of how law firms and lawyers use technology has propelled Helm360 to the industry’s forefront. A technical expert with a penchant for developing solutions that improve business systems and user experience, Bim has a knack for bringing high-quality IT architects and developers together to create innovative, usable solutions for the legal arena.

Transcript

Hi everyone, and welcome back to another episode of The Legal Helm. In today’s episode, I’m going to be talking about navigating legal tech implementation challenges, and specifically about some proven strategies to overcome the top three pain points that I’ve seen over the years when implementing some of the best-of-breed solutions out there.

Before we dig into the show, I’d just like to thank everyone who is subscribed to the channel—really appreciate you taking the time to do that. If you haven’t subscribed so far and you enjoy the episode, I humbly request that you take a moment to like the episode and hit the subscribe button. It really does help us produce more great content for you to listen to.

So, let’s dive right into it. Over the years, I’ve spent a lot of time working in different aspects of the implementation process. I’ve worked in a support team where we saw a lot of the challenges that happen during an implementation and lead to projects missing their timelines. A lot of that came down to the decisions that were made, the process that was followed, and, in some areas, the way the data was migrated, right? And really, I would sum up the issues that I saw across multiple customers over the years, implementing different types of products (but typically law firm ERP solutions), into three key areas, right?

So, there are three pain points. The first is around data conversion; the second is around testing, both user acceptance testing and performance testing; and the third is around user adoption of the product: training, support, and all of the user-focused aspects of using the product once you get to the point of testing and go-live. Those are the three main ones, right?

And I’m going to talk a little bit about each of these in the context of the challenges you face when implementing solutions like this, and also some of the ways you can mitigate that risk so you can return value on your investment as fast as possible without the pain that’s associated with some of these areas of implementation.

So, starting with data conversion: a lot of the time when we’re implementing a new practice management solution, what we end up doing is taking data from our legacy solution, which could be a solution that’s been up and running for many, many years—typically 10+ years, right? And then we’re making a decision to take that data out and migrate it into a new solution, like Elite 3E or ProLaw, whatever the system might be.

Now, in the journey of moving that data, there are a few key components that often cause challenges, right? One is the quality of the data, another is the integrity of the data, and the other is just how the data is formed in the old system, right? And what I’ve seen time and time again is that unless there is enough of a runway to make sure the data is in a good, clean, consistent state before we start the migration to a new product, it can cause all sorts of issues downstream.

The main reason is that once you push that data into a new product and you start your testing phase, if the data hasn’t come across correctly, or it doesn’t balance, or you see other issues in the way the data has been mapped, it can easily derail the next phase of the implementation process, which is typically your user acceptance testing. Most of the time, the people using the product for the first time are the ones most affected by the negativity of seeing their data in such a bad state.

So, data preparation is really, really key, right? Really understanding where your data may have some flaws, where there may be challenges around the data, and how we can resolve those well ahead of time to ensure that we are in a position to migrate that data much more smoothly and easily without having to do lots of rework cycles.

One of the reasons for an elongated implementation timeline is simply not addressing those issues up front. If you imagine the flow: data is extracted from the source system and pushed into a staging schema (a temporary area), where it is massaged so that it fits the schema of the new system you’ve purchased, and then that data is pushed in.
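
To make that extract, stage, transform, and load flow a little more concrete, here is a minimal sketch of the pattern in Python. The table and column names are hypothetical stand-ins, not the actual Enterprise or 3E schemas, and a real conversion would run against the vendor's tooling and databases rather than in-memory SQLite.

```python
# Minimal sketch of the extract -> stage -> transform -> load pattern.
# Table and column names are hypothetical, not the real Enterprise/3E schemas.
import sqlite3

legacy = sqlite3.connect(":memory:")   # stand-in for the legacy system
staging = sqlite3.connect(":memory:")  # stand-in for the staging schema

# 1. Extract from the legacy system.
legacy.execute("CREATE TABLE client (clnum TEXT, clname1 TEXT, clname2 TEXT)")
legacy.execute("INSERT INTO client VALUES ('00123', 'ACME ', 'HOLDINGS LLC')")
rows = legacy.execute("SELECT clnum, clname1, clname2 FROM client").fetchall()

# 2. Stage and massage the data so it fits the new system's shape.
staging.execute("CREATE TABLE stg_client (client_number TEXT, display_name TEXT)")
for clnum, name1, name2 in rows:
    display_name = " ".join(p.strip() for p in (name1, name2) if p and p.strip())
    staging.execute("INSERT INTO stg_client VALUES (?, ?)",
                    (clnum.lstrip("0"), display_name))

# 3. Validate before loading: row counts and simple sanity checks catch
#    mapping problems before they ever reach the new system.
assert staging.execute("SELECT COUNT(*) FROM stg_client").fetchone()[0] == len(rows)
print(staging.execute("SELECT * FROM stg_client").fetchall())
```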

Now, once we start to identify that there’s some issues with that data in the new system, we then end up going into a rework cycle. And that rework cycle can be quite painful because you don’t know what you don’t know until you’ve really gone through a full execution cycle of testing.

So what typically happens is that you’re not limited to just one rework cycle; you tend to have lots of rework cycles, which ultimately elongates the project to the point where you’re spending more money, because you’re taking more people away from their day jobs, or bringing in contractor resources, to help you test and validate that the system is actually doing what it’s intended to do from a pure data perspective. That can lead to a lot of frustration, a lot of issues around exceeding the project budget, and just the general pain of having to go through repeated tests, right?

So, how do you solve some of those pain points? Just to be clear, it’s very nearly impossible to get a completely clean conversion first time round, right? So you’ve got to walk into a conversion to a new system knowing that there’s going to be some rework that needs to happen. The level of rework is really what we’re talking about here; we want to minimize how much rework we might need to do.

Data preparation is really, really key. So firstly, understanding the integrity of your legacy system is very, very important. Ask yourselves the questions: “Do you regularly balance your system?” “Do you have proven balancing reports that show not only that the system balances, but that it balances to your subledgers as well?” Because that can have a big impact downstream in terms of identifying differences that could be causing imbalances.

Because normally when you do a conversion, the expectation should be that, at best, whatever differences you have in your source system, your legacy system, will be carried over into your destination system. So, if there’s an opportunity to solve any of those balancing issues ahead of time, this is the time to do it, right? Then you have a clean starting point in your new system.
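
As a simple illustration of that kind of balancing check, here is a sketch in Python. The figures and tolerance are made up, and a real firm would drive this from its own general ledger and subledger queries.

```python
# Hypothetical balancing check: does the AR control account balance in the GL
# agree with the sum of open items in the AR subledger? Any difference found
# here should be investigated in the legacy system, before conversion.
from decimal import Decimal

def balance_check(gl_control_balance: Decimal,
                  subledger_amounts: list[Decimal],
                  tolerance: Decimal = Decimal("0.01")) -> Decimal:
    """Return the difference between the GL control balance and the subledger total."""
    subledger_total = sum(subledger_amounts, Decimal("0"))
    difference = gl_control_balance - subledger_total
    if abs(difference) > tolerance:
        print(f"Out of balance by {difference}: resolve before migrating")
    else:
        print("GL control account balances to the subledger")
    return difference

# Example with made-up figures.
balance_check(Decimal("125000.00"), [Decimal("50000.00"), Decimal("74990.00")])
```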

The other area is really around the integrity of the data. A lot of the time when we’ve approached a data extract for a customer, what we’ve realized is that the data was inserted into the system over many years, and, if you imagine, a lot of law firms go through mergers and acquisitions, which means new data is added on top of the data that was originally there. So, a lot of the time, what we find is that the legacy data also contains a lot of corrupt data, right? It’s invalid data that wasn’t really properly checked, and it doesn’t really have an impact on day-to-day operations because it’s very, very old.

But when it comes to moving all of your history over, it can cause issues, because there are broken links, for example, between primary and foreign keys within the system, and you’re just not able to identify them until the data gets moved into something that is more highly normalized, like, for example, an Elite 3E schema, which is a highly normalized database.
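
To give a feel for what one of those integrity checks looks like, here is a minimal orphaned-row check. The client and matter tables are hypothetical stand-ins for a legacy schema, not the real Enterprise or 3E tables.

```python
# Sketch of an orphaned-row check: matters whose client no longer exists.
# Table and column names are hypothetical stand-ins for a legacy schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE client (clnum TEXT PRIMARY KEY, clname TEXT);
    CREATE TABLE matter (mmatter TEXT PRIMARY KEY, clnum TEXT, mdesc TEXT);
    INSERT INTO client VALUES ('00123', 'Acme Holdings');
    INSERT INTO matter VALUES ('00123-0001', '00123', 'General advice');
    INSERT INTO matter VALUES ('00999-0001', '00999', 'Orphaned matter');  -- broken link
""")

orphans = db.execute("""
    SELECT m.mmatter, m.clnum
    FROM matter m
    LEFT JOIN client c ON c.clnum = m.clnum
    WHERE c.clnum IS NULL
""").fetchall()

print("Orphaned matters to fix or exclude before conversion:", orphans)
```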

Being able to run a level of integrity check against the system is going to be important. Now, there’s various ways to do that. There’s various tools available to do that. We have a tool called Digital Eye, which does exactly that. It kind of does a review of your data and gives you a nice report to kind of help you identify where there could be some issues. And so, you’re then in a good position to be able to go and make decisions on how to solve those issues or work with a vendor to be able to solve those for you.

The other area on the data conversion side is really around addresses, right? Addresses, pretty much across the board, can be very painful in terms of being able to address them (no pun intended): correcting those addresses and going through the process of not only cleansing and normalizing address data, but also de-duping, because a lot of the time we have duplicate addresses.

And when you’re moving from some of the legacy systems (for example, we see a lot of customers moving from Elite Enterprise), the way the Enterprise schema works is that there’s no real defined field for the elements of an address. You have address line one, two, three, four, five, six, and you can plug whatever you want into those. So, there’s no real consistency in terms of knowing that, say, address line five is always going to be the city, or the state. The only way to really handle that is to make some mapping decisions: to say we’re going to assume that this particular field is the city and this particular field is the state, because that’s how we see most of the data distributed. Or you use some more advanced techniques to improve that data so it can be moved in a better state.

So again, one of the things that Digital Eye does is, well, a few things. One is it parses a dataset that contains an address. So even if you pass in a string that is just an address with no formatting, it will pick out the components, using some machine learning and AI techniques, and then put those components into different fields so that you can clearly see what is a city, what is a state, what is a country, what is the first line of the address, and so on for all of the different components.

What it will then do is go through a cleansing and normalization process. For example, you may have inconsistency in how the state has been entered into the system: maybe some records use the short abbreviation “FL” for Florida, while others use the full name, “Florida.” It can go through and normalize those for you, so you have consistency in how that looks and feels within the system. It just makes it look a lot better, and if you’re using it to print out on bills, et cetera, it makes that whole experience a little cleaner.

It also does other things, like standardizing address formats and telephone number formats, so that you’ve got a much better and more consistent way of looking at addresses, both from a physical address perspective and a telephone number perspective, and also email addresses, to make sure those are consistent and valid, right? That they are actual email addresses that could be used.
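
As a rough sketch of what such a normalization pass involves (this illustrates the general technique only; it is not how Digital Eye itself is implemented, and the reference data here is a tiny made-up subset), it might look something like this:

```python
# Sketch of a normalization pass over state, phone, and email values.
# Illustrative only; a production tool would use fuller reference data.
import re

# Map common abbreviations and variants to one canonical form (tiny subset).
STATE_NAMES = {"FL": "Florida", "FLA": "Florida", "FLORIDA": "Florida",
               "NY": "New York", "NEW YORK": "New York"}

def normalize_state(value: str) -> str:
    key = re.sub(r"[^A-Z ]", "", value.upper()).strip()
    return STATE_NAMES.get(key, value.strip().title())

def normalize_phone(value: str) -> str:
    digits = re.sub(r"\D", "", value)
    if len(digits) == 10:            # assume US 10-digit numbers for the sketch
        return f"({digits[0:3]}) {digits[3:6]}-{digits[6:]}"
    return value.strip()

def is_plausible_email(value: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value.strip()) is not None

print(normalize_state("FL"), normalize_state("florida"))   # Florida Florida
print(normalize_phone("813.555.0142"))                     # (813) 555-0142
print(is_plausible_email("billing@examplefirm.com"))       # True
```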

And then the final thing is really around deduplication, right? Because one of the other big pain points is that you could have duplicate addresses in your legacy system that connect to different entities. And when you’re moving to a new system like Elite 3E, which has a more entity-based structure, the ideal is that you simplify that, so you have one address associated with an entity that then flows through to other aspects like your clients, your matters, your timekeepers, et cetera. This allows you to de-dupe those addresses, again using some AI techniques to make that happen in bulk, fast and easily, so that you can actually cleanse that data before you even get to the point of data conversion.
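
And as a rough idea of how bulk de-duplication can work, here is a simplified sketch that normalizes each address into a comparison key and groups near-identical records with fuzzy matching; it is a simplistic stand-in for the AI-based matching described above.

```python
# Sketch of address de-duplication: normalize to a key, then fuzzy-match.
# A simplified stand-in for the ML-based matching described above.
import re
from difflib import SequenceMatcher

def address_key(addr: str) -> str:
    """Collapse case, punctuation, and common abbreviations into a comparison key."""
    key = re.sub(r"[^a-z0-9 ]", "", addr.lower())
    for full, abbrev in (("street", "st"), ("suite", "ste"), ("avenue", "ave")):
        key = re.sub(rf"\b{full}\b", abbrev, key)
    return re.sub(r"\s+", " ", key).strip()

def dedupe(addresses: list[str], threshold: float = 0.92) -> list[list[str]]:
    """Group addresses whose normalized keys are near-identical."""
    groups: list[list[str]] = []
    for addr in addresses:
        for group in groups:
            if SequenceMatcher(None, address_key(addr),
                               address_key(group[0])).ratio() >= threshold:
                group.append(addr)
                break
        else:
            groups.append([addr])
    return groups

print(dedupe(["100 Main Street, Suite 200", "100 Main St Ste 200", "55 Elm Avenue"]))
```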

So, these are some of the methods that can be used to take stock of what your data looks like, get a real feel for how good or bad its state is, and address one of those pain points: preparing your data in a way that makes that whole conversion cycle a lot, lot smoother.

The second area is really around testing. When I think about testing, I break it into functional and performance testing. Functional testing covers the real-world usage scenarios of the product. If we think of the data conversion effort as being purely focused on validating that your data has come across successfully, we now need to make sure that your data can actually be used in the new system from a functionality perspective: for example, that you can take some of your existing bills and reverse them within the system based on that converted data. It’s about following through the business logic of the application and making sure that all of your key business processes work in the context of the new application, with your setups and your data in place.

The other aspect is performance testing. Fundamentally, for on-prem solutions, it’s about making sure that as you add more users to the system, the system can scale up with you and you don’t see a degradation in performance.

So when it comes to testing and some of the pain points that I’ve seen over the years with regard to testing, it really comes down to the amount of effort that it actually takes to be able to not just do one test—one end-to-end test of all of the different scenarios that you have—but it’s also then being able to consistently repeat those tests as you see changes throughout the project, right?

So, for example, take a full regression test. By a regression test, I mean a full end-to-end test that factors in all of the aspects of the application, not just, say, one way of generating a bill, because there may be different ways to generate that bill in the application. You really want to test all of the scenarios associated with it to ensure it can do what it is intended to do from a product perspective.

And you want to document the output, to make sure that the process you followed was successful, that each of the steps was successful, and that you’re testing for variations. Again, from a billing perspective, if there are templates, and template options in particular, that have been implemented, you want to test a variety of combinations of those template options (which may control, for example, how the time detail comes out in the template output) to make sure the system handles all of those scenarios successfully.

When you go through that cycle, it is quite a time-intensive task as it is, because if you think about it, you’ve got to go through every single aspect of the system, validate that it’s all doing what it was intended to do, and document it along the way so you’ve got a point of truth to say this was successful.

And if something goes wrong—so for example, you could have an issue with your data that causes a bug within the product to come to light, right? And in those kinds of scenarios, then there’s a kind of triage that needs to happen. So, we need to understand why that problem’s happening, identify the problem, fix the problem, and then repeat the test to make sure that you’ve actually solved the problem.

So, you’ve got to go through that testing process not just once, but each time something breaks in that cycle you’ve got to go back, retest it, and make sure that it works. Now, one of the biggest pain points is that this is quite resource-heavy, right?

You’ve got to be able to take the right people from the team that have an understanding of the business process, have an understanding of the way that you use the product, the way that the product works. Because in a lot of cases, clients are seeing these products for the first time, so they’re going on their own journey in terms of being able to train up on a product that they’ve never used before, understand how to use it, and then, obviously, identify issues along the way.

And that can be quite frustrating, right? I’ve seen many, many times where the teams that have been allocated for testing have been taken away from their day jobs—like there may be people coming from the billing department, from the AP department—and they are, ultimately, doing all of the tests, but they’re also losing confidence in the product because they’re seeing what are basically bugs.

And they’re having to repeat the cycle again and again, which is not something that they’re used to doing and that can cause a big erosion in confidence, which obviously has a negative impact on the project. And the worst case scenario, which I’ve seen many, many times as well, unfortunately, is that because of the load that’s being put on these individuals, what ends up happening is they miss things because they may test once thoroughly, but then in the second cycle, they may not, right?

Because they’re so frustrated at that point, finding a bug every time they go to test, that they’re no longer in a state of mind where they’re going to do a thorough job of testing. And that can lead to all sorts of issues further downstream, especially at the point of go-live, when you’re identifying show-stopper issues after go-live simply because you didn’t have the right approach to testing to identify those issues early and fix them.

Now, some of the bugs that come up during the testing process are not just focused around data, right? They could also be functionality issues caused by the product itself. You can have a bug in the actual product, because the vendors delivering these solutions are testing with a dataset, or a number of datasets, but it would be virtually impossible for them to test with every client’s configuration. A lot of the products out there are highly configurable so that they can be tailored to your use case, and it’s just not feasible to test all of those scenarios really well. So sometimes you will experience bugs in products that need to be addressed by the vendor.

And again, same kind of principle, right? Like, normally the way that a vendor will produce a fix for a bug is going to be part of a patch or a service pack. And when that happens, you’re not just getting one fix for a problem, you’re getting a variety of fixes for a number of issues that you may not have encountered. Maybe another customer encountered them with their dataset, but ultimately, you’re getting changes to code in various areas in the application that fundamentally need to be retested, right? Because it could have a knock-on impact to other areas of testing.

So normally what ends up happening is that you could go through the cycle. You could find one bug, for example, in the billing module. You may then go to your vendor and say, “We found this bug, give us a fix for it.” And they say, “Yep, you can get the fix in the new service pack that’s coming out next week.” You apply the service pack—which actually touches lots of different modules, different fixes.

And then what ends up happening is it breaks some functionality in another area for you, which you’re not identifying because you’re only doing a unit test on that individual area within billing, which kind of means that, really and truly, you need to take a step back and actually do a full regression test, right? Which, again, is very time consuming, very frustrating because the thing that used to work now could potentially be broken. So you’ve got to go back to the beginning and do all of that testing again, which is pretty painful, to be honest.

And then the other area that can be quite problematic is customization, right? Customization and integrations, both, because they introduce new changes into the system that may not have been tested before by the vendor and can also cause issues. And that can make things a little more complicated depending on who’s doing the customization, because, typically, it will either be your in-house developers, an external contract developer, or the vendor’s developers building those custom solutions for you.

And as part of an implementation process, it means that you’ve got development cycles happening as well as testing cycles that are happening alongside each other. So, there’s lots of change happening, lots of tests, lots of repeated tests that need to happen to be able to make sure that you successfully validate that the product is working for you.

And again, all of that translates to lots of time, lots of resources needed to do it, some level of expertise in terms of understanding how to test within the product, and a lot of patience, right? Because you’ve got to do it again and again.

So, there are different ways to solve this problem and do a good job of preparing yourself for the testing phase. A lot of it comes down to approach: making sure that you’ve invested a little bit at the start, that you’ve identified the resources that are going to execute and help with the testing, and that you’ve got the right mindset when you go into testing and set expectations accordingly.

So, let’s start with the expectation side of things, because I think this is probably one of the biggest ingredients. You have to prepare your team to make sure that they understand that, as they go into testing, they will find issues. If you don’t do that from the outset, and the team goes in with a mindset that everything’s going to run smoothly and it’s going to be an amazing experience, they’ll be disappointed from the get-go. Because as soon as they run into an issue, their confidence level is going to get eroded, and that’s going to continue to happen with each issue that comes along, right?

So, we have to set the expectation that it’s a brand-new implementation. It’s a new product. There’s a lot of change, because even if you don’t change the product itself, your data is still different; there’s a big change right there just from migrating data into that system. That can have a knock-on impact on the way the product functions and can lead to issues. So, we’ve got to prepare the testing teams so they understand there are going to be issues and, psychologically, they’re ready for that to happen.

Secondly, we need to make sure we’ve got a really good process in place for issue resolution and issue tracking, and a really good handle on how we’re tracking those issues and who’s responsible for them. Because, like I said before, who is fundamentally responsible for producing the fix can vary. But the responsibility and ownership of retesting the fix, as a final port of call at least, is going to be down to you as the customer, because you need to validate that it works for you at the end of the day.

So, prepare your testing teams mentally and get them ready for what’s going to happen. Get the process nailed down, documented, and communicated: how issues are going to be managed in your issue tracking system (or an issue tracking system), and who is responsible for making sure those tickets move forward, move along, right?

So, if it needs to go to a vendor, you need to have somebody who’s owning the responsibility—either a project manager or somebody who’s the lead for the issue management cycle—to make sure that the ownership is passed on to somebody and that that’s managed in terms of delivery times, et cetera. So that kind of forms part of your delivery cycle overall and doesn’t end up derailing other milestones that you’re looking to hit.

The other area that can really be helpful is just understanding how you use your system today, right? So, documenting your key business processes, if you don’t already have that done, and making sure that that’s distributed amongst the people that will be testing is a very valuable exercise. Because then we know what the nuances of your billing process would be. We understand any of the other aspects that need to be considered just from a pure business reasoning perspective, so that, then, we’re really focused on just pure functionality and not necessarily trying to learn how the business is operating.

We’re really focused on just the functions and features of the product working with our data and our process in place. So, documentation around the system is going to be really helpful. The other thing that I think is really, really helpful is then getting your best people, the people that really understand the areas that they’re working within.

So, if you’ve got a billing manager who really understands all of the aspects of how you bill today as a firm, having them involved in defining the tests that need to happen is going to be very, very helpful, because, at the end of the day, if you’ve got your best people doing that part of the job, then execution can be handed off to somebody else.

It doesn’t necessarily need to be the same people executing the tests again and again, but they need to be involved in, at least, clarifying that the test definitions are correct and that they cover all of the areas that need to be covered. And that can mean both positive and negative testing. By that I mean, in most cases, we tend by default to focus just on the positive case: when I click, for example, the bill button, the bill gets produced and comes out on my screen or my printer, depending on what I’m doing. But in some cases, you may want to prevent a bill from going out if it doesn’t meet certain criteria, right?

So maybe there’s a workflow involved. Say the workflow is set up so that if the bill amount is over a million dollars, it has to go through another approval stage, to the managing partner, before it actually gets printed. From that perspective, when you test that scenario, you want to make sure that the product prevents you from printing or sending the bill before it has gone through that approval process. You’re testing that the product is preventing something from happening as well.
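
As a tiny illustration of pairing a positive and a negative test around that kind of approval rule, here is a sketch; the submit_bill function and the threshold are hypothetical stand-ins, not actual 3E workflow APIs.

```python
# Hypothetical positive/negative test pair for an approval workflow rule:
# bills at or above $1,000,000 must not go out without managing-partner approval.
APPROVAL_THRESHOLD = 1_000_000

def submit_bill(amount: float, approved_by_managing_partner: bool) -> str:
    """Stand-in for the system's billing workflow logic."""
    if amount >= APPROVAL_THRESHOLD and not approved_by_managing_partner:
        raise PermissionError("Managing partner approval required")
    return "sent"

def test_bill_under_threshold_goes_out():          # positive test
    assert submit_bill(250_000, approved_by_managing_partner=False) == "sent"

def test_large_bill_blocked_without_approval():    # negative test
    try:
        submit_bill(1_500_000, approved_by_managing_partner=False)
    except PermissionError:
        pass  # expected: the workflow prevented the bill from going out
    else:
        raise AssertionError("Bill over threshold should have been blocked")

test_bill_under_threshold_goes_out()
test_large_bill_blocked_without_approval()
print("Workflow approval tests passed")
```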

So those all need to be taken into consideration. Then it’s really about execution and how you execute the tests. That’s where some more junior team members can be involved from an execution perspective, because they’re following a prescribed method of testing and documenting along the way. Or it’s easier for you to outsource to a third party that can provide resources.

A lot of firms that we work with will want to use offshore or nearshore resources to do just the execution part of their testing, so that they can offload the responsibility of the repeated cycles to somebody else once they’ve been involved in defining what the tests should look like. That’s another way of lowering the cost of testing. But it also lets you focus your high-value people, the ones with knowledge of the business process, on other areas of the implementation, which is really, really key to making sure the project continues as planned.

And then the final area that can be very helpful, and one of the approaches that we take with our customers these days, is really around automated testing, right? So, how do you take all of that good stuff in terms of the test definitions and how products should work and remove the element of having to physically run through those tests again and again and again, and have an automated way of being able to do that click-button approach to be able to repeat a test?

And what we have with, for example, our 3E automation tool is a framework whereby we ship a library of predefined tests that we’ve used over the years across many, many global law firms, as well as national law firms, to help them validate the 3E product. What we’ve basically done is add an automation layer that allows you to configure those tests to use your data, and then literally click a button and have it run through the process of generating a bill, running a bill through a workflow, or posting a voucher.

Whatever it might be, it runs through those fundamental tests that validate the system is good, and it records the output of the processes we’re following, so we can see, as it runs through the application automatically, that it’s taking screenshots as it goes and validating that the output is good. And if there are any errors, it records those outputs so that we can see exactly where the issues are.

So, what that does is it allows us to really compress the testing cycle, because now that we’ve got one version of truth in terms of how the product should be tested, it doesn’t matter how many changes happen throughout the cycle. It’s very easy for you to either perform a unit test by clicking a button, or perform a full regression test by clicking a button, without needing a human to run through the same sequence of events again.
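
To give a flavor of what a data-driven, repeatable regression run can look like, here is a generic sketch; it is not the actual 3E automation tool, and the scenario fields and run_billing_scenario helper are hypothetical placeholders for whatever drives the real application.

```python
# Generic sketch of a configurable, repeatable regression run.
# The scenario definitions and run_billing_scenario helper are hypothetical;
# a real framework would drive the application UI/API and capture screenshots.
from dataclasses import dataclass

@dataclass
class BillingScenario:
    name: str
    matter: str            # the firm's own converted data plugs in here
    template_option: str
    expected_status: str

SCENARIOS = [
    BillingScenario("standard bill", "00123-0001", "detail-by-timekeeper", "posted"),
    BillingScenario("consolidated bill", "00123-0002", "summary-only", "posted"),
]

def run_billing_scenario(s: BillingScenario) -> str:
    """Stand-in for driving the application; returns the resulting bill status."""
    return "posted"

def run_regression(scenarios: list[BillingScenario]) -> None:
    results = []
    for s in scenarios:
        status = run_billing_scenario(s)
        results.append((s.name, status, "PASS" if status == s.expected_status else "FAIL"))
    for name, status, verdict in results:          # the documented evidence trail
        print(f"{name:20s} -> {status:10s} {verdict}")

run_regression(SCENARIOS)
```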

That saves a huge amount of time. It removes the confidence hit that can happen when we have to repeat a test again and again. Most importantly, it catches things that may not otherwise have been caught in some of those positive and negative testing scenarios, because the test scenarios themselves are more robust and really test all the features and functions of the product with your data and your setups in place. That gives you a much more confident outcome when you finish UAT, so you know you’re in a good place to continue.

And just to add to that on the performance side: the same principle applies. What we’ve seen time and time again with on-prem deployments of products like 3E and ProLaw is that the one thing the vendor does not have any control over is the infrastructure that’s being provisioned, right?

And sometimes we can follow a product’s system requirements and think that we’ve got the perfect system, the Ferrari of systems to run our solution on. But actually, it can take very simple things to derail performance. We’ve seen that, again, time and time again. Little things, for example, like putting the wrong disks in servers (disks that simply aren’t fast enough), or having contention on disk because you’re sharing disks across multiple systems, may not be immediately evident. But ultimately, depending on how your configuration is set up, that’s quite a common pain point as well.

And even things like antivirus, right? Most customers will have different antivirus vendors, different configurations, different exclusions set up, and on-access scanning may or may not be configured appropriately. All of those things can have an impact on performance, right? It’s about making sure that we understand what those impacts could be, what the best practice should be for setting up antivirus solutions, and, for example, how we test this properly to make sure we mitigate some of that risk.

And most important is then being able to validate that the system can scale up. The best way to do that, in an ideal world, is to go down the automation route again. That’s not the only solution, but it’s one solution: automate loads that mimic month-end or year-end load situations, so that you can scale up the number of virtual users hitting the system, performing those functions, running those metrics, running those bills, reversing bills, adding time, all of that kind of stuff.

And monitor the system at the same time to make sure that you’re not seeing any contention on your application servers, your database servers, all elements of the stack, so that you can prove that after go-live, when you put your 500 or 600 users on the system, the system is going to stand up and you’re not going to see issues.
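
As a very rough sketch of that idea, spinning up concurrent virtual users and recording response times while you watch the server-side metrics, something like the following could work; the endpoint URL and payload are hypothetical, and dedicated load testing tools give you far richer ramp-up profiles and reporting.

```python
# Rough sketch of a concurrent "virtual user" load test.
# The endpoint URL and payload are hypothetical; dedicated tools (JMeter,
# Locust, etc.) give you richer ramp-up profiles and reporting.
import time
import statistics
import concurrent.futures
import urllib.request

ENDPOINT = "https://erp.example-firm.internal/api/time-entries"   # hypothetical

def virtual_user(requests_per_user: int) -> list[float]:
    """Each virtual user posts time entries and records the response times."""
    timings = []
    for _ in range(requests_per_user):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(ENDPOINT, data=b'{"hours": 0.5}', timeout=30)
        except Exception:
            pass  # count failures separately in a real test
        timings.append(time.perf_counter() - start)
    return timings

def load_test(users: int = 50, requests_per_user: int = 20) -> None:
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        all_timings = [t for result in pool.map(virtual_user, [requests_per_user] * users)
                       for t in result]
    print(f"{users} virtual users, median {statistics.median(all_timings):.3f}s, "
          f"p95 {statistics.quantiles(all_timings, n=20)[18]:.3f}s")

if __name__ == "__main__":
    load_test()
```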

Another important performance consideration is latency from different regions, right? If you have offices in the Far East, for example, that may be coming in over the WAN, they may have a different experience of the product because, ultimately, there’s latency between those offices and the region where the servers are physically located. So, it’s important to validate what impact that latency is going to have, so that you have a real feel for whether it’s going to be acceptable or not.

Or whether you need to look at different options for how you deploy the app, maybe through virtualization or virtual desktops, for example, if the system is not responding fast enough for those regions to be productive. So doing a performance and load test is a good way of validating that.

But equally, a coordinated manual test can also be just as effective if you can get as many people as you can on the system at a coordinated time. Do the same level of measurement from a server-side perspective and have everyone follow through a specific test script routine and measure the impact of that over a sustained period of maybe one or two hours. That will still give you a bit of a feel as to how well the system is going to scale up to the level that you want it to if it’s possible to get your people on the system at the same time.

So that covers data conversion and testing. And the final area is really around user adoption training and just general support of users who are now using the system, whether that be from a UAT perspective or from a go-live state perspective, right?

Now, what we’ve seen, again, as one of the big pain points is that during a project there’s so much pressure on the timeline and there are so many moving parts, so many decisions that people need to make (in terms of setups, how the system may be customized, how workflows may be built), that even though there may be some organized training sessions, our minds are not on the training at that point, because we’ve still got our day jobs in the background, with people hounding us for answers on that side.

We’re being pulled in by the project team to make decisions on how the system design should be. And really, training is the last thing on the agenda in your mindset because you’ve got all these other things to do before you can get to the point where you can actually use the product, right?

So, in those scenarios, what we see is that during the implementation cycle, although you may absorb some of the stuff around how the product should be used, what will end up happening is a lot of the end users will kind of zone out a little bit, right? And they’ll take a little bit and then they’ll say, “Oh, don’t worry. I’ll figure it out once we get to the point where the product is in a bit more of a serious state. Like, we’ve got through a couple of UAT cycles, the bugs have been solved, the data issues have been solved, and then I’ll look at the product because then that’s the time that it will actually work fully.”

And the challenge with that is that the learning curve with some of these products is quite steep. And what tends to happen is that you’re really pushing a lot of pressure and ownership and responsibility to the support teams within your business and with the vendors themselves to enable those people to be able to be successful on go-live.

And the sad thing is that what ends up happening, and what we’ve seen time and time again with firms that we’ve worked with, is that this has a knock-on impact to the financials. Because what we’ve seen is that in a lot of cases, particularly in the billing area, the billing backlog just builds and builds and builds because the people that are doing the billing are not able to process as fast as they used to in the old system because they’re dealing with a brand-new system.

They may have used the other system for like 10+ years, as I mentioned. They’re now having to learn a new system. They may be having some issues with the system. There may be some performance issues. There may be just some understanding issues in terms of how to do certain functions within that system.

So, as a net impact, what ends up happening is your billing cycle slows down. You get fewer bills out of the door on time, you don’t process fast enough, and that has a knock-on effect on collections, right? And that’s not a good scenario for anyone to be in. So, there are a couple of ways to mitigate this risk.

One is to anticipate some of those issues that are inevitable, like the impact on the billing process, and get support during that period. There are different ways to do that. One is to bring in additional resource, maybe contractor or temporary resource, to assist the billing team through the volume build-up and slowdown that’s going to happen, so that you’ve got more people to process the work. That could be one approach, although it’s not foolproof, because, ultimately, you need to get the right people: people who have knowledge and experience of the product, can hit the ground running, and also understand your business processes.

But another alternative is to use a billing support provider, right? So, what we’ve been able to spin up for customers in the past is a billing support team that could be nearshore or offshore, that allows a low-cost entry point to have additional pairs of hands to help process faster. So, we can come in, do a little discovery, understand what your billing process looks like, and then we act as an extension of the team. And then that way you’ve got the right people, even if it’s for a short period of time, to make sure you don’t fall behind on your billing processes. And your billing cycle hits the mark, and you continue to see the value of the product without feeling the pain of delayed collections, right?

The other thing you can do happens earlier on; the option above is one way to solve it if you leave things right till the end. If you want to address some of this earlier, my recommendation is to really invest in ways to help your users understand the product quicker. Sitting through a training session is one way of doing it, but it’s quite time-consuming. Another way is to give them some training videos to watch, but again, that’s quite a big, time-consuming thing for people to go through, and not everyone learns well from a classroom session or from video training in particular. So, you may not get the biggest impact.

But what can be helpful is a shortened user guide or a quick start guide for the new product, with the billing process, or AP process, or whatever process it is, defined for that particular type of user, right? So, if you’ve got somebody from your billing team, maybe you invest a little time in developing a quick start guide, and that quick start guide really just covers the how-to for specific scenarios.

So, how do I generate a bill? These are the five steps that you need to follow. How do I change template options? These are the five steps you need to follow. How do I reverse a bill? Like, all of the things that they’re going to need to know for their particular role. So, then they’re not getting distracted with all of the other bells and whistles that the product can do, but they’re really just focused on, like, what they need to do for their day job. And they’ve got a way of being able to quickly reference it just by picking up the guide or looking at it on their machines so that they can really support themselves during that process.

And we’ve seen that really accelerate the process of learning the product, because, ultimately, they don’t need to ask anyone; they’ve got the information in front of them. And then you’re only dealing with true issues at the end of it, cases where something genuinely isn’t functioning. That can have a big, big impact, not just on the billing process, et cetera, but also on user-level adoption of the new product and the happiness rating of your users using it, right?

Because it can be quite frustrating to change from a product that you have been using in your role for a very long time, for many, many years, to a brand-new product that you’re learning from scratch, while also having to make sure you hit your targets and keep the business moving forward. All of those things kind of lead to a perfect storm scenario unless you put these measures in place to mitigate that risk.

So, there you have it: data conversion, testing, user adoption. These are the three key areas that we’ve seen time and time again cause implementation projects to get derailed. Hopefully, some of the tips I’ve mentioned today are helpful. If you have others that you want to talk about, please do reach out to me. I love talking about this particular topic because I’ve been living and breathing it for many, many years. And I’m always keen to learn other techniques, processes, and solutions that can speed up the implementation process, mitigate the risk, and make it as happy an experience as it can be for customers who have invested so much money in a new solution, so that they can really enjoy the benefit of it and not go gray at the same time.

Thank you very much for listening as always. It’s been a pleasure talking to you guys and I’m really looking forward to the next episode. Bye for now.

We hope you enjoyed today’s episode of The Legal Helm. Thank you as always for listening. If you enjoyed the show, please like and subscribe on iTunes or wherever you get your podcasts. It really helps us out.

Helm360 is a full-service legal tech provider specializing in BI, chatbots, application managed services, and cloud hosting solutions.