Tara (00:50):
Thank you so much for tuning in to episode 65 of the Art of Estate Planning podcast. Today is a special episode where I am bringing in a subject matter expert for our topic all about using AI in your legal practice safely and effectively.
(01:11):
Now, you can't go online without hearing about the AI revolution, how we should all be using it, how people are using it to make money while they're sleeping. And if you don't get on board, you're falling behind like a dinosaur. But with this exciting opportunity comes great risk, especially for legal professionals, where we are held to a higher standard of duty and ethics. So that is why I brought in one of the gurus in this space, Jennie Pakula. She is an expert in regulatory compliance advice for lawyers. She has been managing compliance and ethics for lawyers since the nineties. And in this episode, you will hear Jennie set us straight on how to use AI safely in your law firms whilst also being aware of the risks that we need to consider. Now, this is actually a presentation that Jennie delivered to the Art of Estate Planning Facebook group in our monthly free power hours. So I'll put the YouTube link in the show notes so you can watch the video if you want. There's also some Q&A at the end that we haven't included in this episode, so check that out if you want to watch the conversation. But otherwise, I'll hand you over to Jennie and her insights.
Jennie (02:35):
Thank you so much, Tara. It's just such a great pleasure to be here with you. Thank you so much for inviting me, and hello to everybody. I know that there are people out in the audience that are already friends of mine, but I hope I'll have some more lawyers who are friends with me soon. Yeah, so just to recap on Tara's welcome: I was in legal profession regulation in New South Wales and Victoria for about 30 years, read loads and loads of complaints and did all sorts of different things. And my last role was as manager of innovation and consumer engagement for my last five years at the Victorian Legal Services Board and Commissioner. So I'm deeply interested in AI, how to use it safely and well, and I also really love helping lawyers to make sure that they've got their compliance all fixed up. So I'll be talking a fair bit about AI policies throughout the presentation.
(03:25):
That's an unashamed plug for the fact that I have a policy for sale, and I'm very happy to help people when they're thinking through how to implement a policy in their law practice. So do reach out if you're interested. So what are we going to cover today? First of all, we're going to have a bit of a look at where we are now. It's been three years since ChatGPT burst onto the scene, so we just need to think about, well, who's in the room, where's everybody at and what are some of the things that we are concerned about? We'll have a brief look at that big question, will AI replace lawyers? And then we'll have a quick look at what is generative AI and how does it work? Because I think you really do need to understand some of the basics about the technology in order to deal with it realistically.
(04:09):
And then the bulk of the presentation is going to be on ethics, use cases and your policy settings. So I'll be talking about a bunch of ethical principles, how to breach those ethical principles and how to not breach those ethical principles. So I hope that'll be helpful for you. There's a lot we could talk about when we talk about AI. So practical skills, how to prompt, which tools I use and how I use them: I don't really have time to cover that, so I'm not going to be mentioning it. And we also won't be talking about general regulation of AI, so that's really not a topic that we have time to consider. If you're interested in that, I recommend Gary Marcus's book, Taming Silicon Valley, for that purpose. I work a lot with lawyers, as you can imagine, and it's one of those areas where people are very concerned about things that are going on.
(05:00):
So this is what I'm hearing from people I deal with. People want to know things like how the tools work, which is really good, because you have to know the basics of how they work in order to use them effectively. They're really concerned about data security, which again is a very important and valid concern to have. They're concerned about the accuracy and reliability of the outputs; we've all heard the horror stories about the cases and the hallucinations, and that's not going away. And they want to know what we need to do in order to be compliant, so ethical and regulatory compliance provisions. Well, lawyers want to know about the tools. So people are asking, well, what tools should I use? How should I choose the tools? What are the kind of criteria that you need to select the tools for the right purposes? They want to know about what do I need to do to have clear policies and guidelines in place?
(05:52):
What are my ethical obligations, both for myself and my law practice? And also, what are some use cases that don't compromise my standards? So how can I use it safely? Another thing to be aware of is what's actually happening out in the market at the moment. In terms of lawyers, I think there's a lot of FOMO going on as far as AI's concerned. There's a sense in which, well, I must be being left behind, because I hear all this hype, or I see all this stuff on LinkedIn, and it just sounds like everybody's making great use of AI and getting ahead of me and, oh dear, what am I going to do? I can tell you that I know very few lawyers that are actually using it really, really effectively and in a way that saves them a lot of money and really changes the way they practise.
(06:41):
But there are a lot of us that are on the verge of it and exploring it. I will say that the ones that are making the real changes are doing it very carefully and investing a lot of time and quite a lot of money into how they do it. So if you feel like you're being left behind, you're not alone, but you can start today, as long as you get the basics of what you're doing right. Of course, the perceptions of clients are also very, very important to understand, and unfortunately, there's quite a large number of clients that have the impression that AI is a viable substitute for a lawyer, which of course it isn't. So we really have to be very, very careful to go back to basics and to understand that we are not document generators. There is much more to being a lawyer than that. I'll talk about that again in a minute, but it means that we're going to have to change the way that we understand what we are selling to our clients.
(07:36):
So in one of my roles, I managed the front end of the complaints section at the Legal Services Board and Commissioner in Victoria. And the big take-home point was: clients don't know what they're buying, and lawyers don't know what they're selling. You really need to understand what you're selling to your client when you give them a legal service. It's not just generating documents or advice, it's not a linear experience; it's your guidance, it's being the expert in their corner. There is so much more to it. So think really, really carefully about your marketing and think carefully about what it is you're selling. Also, I think that there's some real angst in thinking about short-term gains from AI versus long-term strengths. And I think the smart lawyers are not neglecting things like training their junior lawyers and making sure that those things are not left untended. You might be able to save a little bit of time with AI, but it often turns out to be a false economy.
(08:34):
So it needs to be a very thoughtful approach that remembers that humans are pretty much essential as lawyers. Which brings us along to the next topic, which is: will AI replace lawyers? Now, if you believe a lot of the hype that's out there on social media, you might start to get worried about that, particularly if you're listening to people like Sam Altman or the people that are in Silicon Valley and have a lot to gain from AI being prominent and being relied on. It's worth bearing in mind that a lot of these guys subscribe to an AI mythology, which is very powerful. It's that whole idea that once technology gets to a certain point, it'll just continue to improve itself, and then next thing you know, you're living through the Terminator movie. This is really not objective and it's not well grounded in fact. If you want to know more about this, there are a couple of resources that I'd suggest. Gary Marcus.
(09:32):
I think his Substack is fantastic. He's been in AI development since the nineties, basically, so he really knows what he's talking about, and importantly, he doesn't have a particular dog in the fight. So I think he's quite a good one for putting the needle in the hype. But I think it's also worth understanding that a lot of the Silicon Valley guys are not proceeding with a cogent theory of what human intelligence is. And that's because there is so much about it that we don't understand, that even the most advanced neuroscientists don't understand. There is so much about how our brains work, how intelligence actually operates, that we really don't understand. And I think it's worth bearing in mind that there's a lot of stuff that is not reducible to language and text. So one of the books that I really love and recommend a lot is a book by a neuropsychiatrist and neuroscientist called Iain McGilchrist, called The Master and His Emissary, where he talks about the structure of the brain and how it actually works.
(10:34):
It's absolutely gobsmacking and a really great read, so I definitely recommend that. So I think there's also the understanding that intelligence involves a lot of common sense and tacit knowledge, stuff that we're aware of as we interact with the real world, that simply isn't reducible to text. And I think that really comes to the fore when we have a look at this question of what do we actually do when we are lawyers? What does it mean to be a lawyer? It's surprisingly hard to define. If you think about the process of learning to become a lawyer, it's all about modelling and examples and experience, and over time you realise that it's far more than producing documents and advice. It's kind of like the longer you go on, the more you see. As an experienced lawyer, you'll start to understand that there's more and more depth and more and more clarity and focus that you can bring to a situation, and that is a lot more than just knowing the case law and understanding the documents.
(11:36):
So it's really a way of seeing rather than being an expert in particular information. We're part of a longstanding tradition and culture, a way of thinking, a way of understanding things, a way of arriving at the truth, and a way of understanding how the system works with all its competing interests: our client's interests, the interests of the justice system, the administration of justice itself. That's all very important. We develop a quality called phronesis, which is a Greek word meaning practical wisdom. There's a wonderful book called The Lost Lawyer by Anthony Kronman where he discusses this concept. This practical wisdom is something that the longer you've been in practice, the more you gain it, and the more valuable it is for clients. And on top of all of these things we put our ethics and professional judgement. So when you're thinking about whether AI will replace lawyers,
(12:33):
Remember, it's far more than just performing a transaction or negotiating or producing documents. There's a lot more to it; understand what that is, and help your clients to see the value that you can bring to them that cannot be replaced. So the human element of lawyering is critical and must remain. However, that doesn't mean that AI can't be really useful for us. So let's turn now to the topic of what is AI and how does it work? There's a lot of old AI that you're already familiar with. The e-discovery platforms came about probably 10 or 15 years ago using supervised machine learning. What this means is that they're trained on labelled documents. They're fed a whole load of examples that says: this is what we're looking for, this is not what we're looking for; so include this, exclude that. So it learns to categorise and find similar documents.
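To make that supervised-learning idea concrete, here is a minimal sketch of the kind of labelled-document classifier being described, assuming Python with scikit-learn; the documents, labels and include/exclude framing are illustrative placeholders, not any particular e-discovery product.

```python
# Minimal sketch of supervised learning on labelled documents,
# in the spirit of the e-discovery tools described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labelled examples: "this is what we're looking for, this is not."
docs = [
    "email discussing the disputed contract variation",
    "invoice for catering at the office party",
    "letter about the contract termination clause",
    "newsletter about the firm's social club",
]
labels = ["include", "exclude", "include", "exclude"]

# Learn the word patterns that separate the two categories.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

# Predict the categorisation of an additional, unseen document.
print(model.predict(["memo about the contract"]))  # likely ['include']
```

A real system is trained on thousands of reviewed documents rather than four, but the mechanism is the same narrow one: learn from labelled examples, then categorise more of the same.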
(13:31):
It then predicts the categorisation of additional documents based on what it's learned. And the characteristic of it is that it's for a very particular, narrow purpose, and for that reason it's generally very accurate and very good. So that's the old AI. It can go through enormous volumes of documents that, in the bad old days, the junior clerks or the junior lawyers in the large law practices had to go through in mind-numbing boredom. Oh, I'm glad that's not happening anymore. But generative AI is something quite different. The way that it's been put together is what's known as self-supervised learning. Basically, it's been given a huge body of data, text that's been scraped off the internet and all sorts of sources. It basically tries to encapsulate as much human knowledge captured in language as it is possible to collect. And what the algorithm does, what the system does, is construct a model of the patterns of knowledge as captured in the text.
(14:34):
So basically it's not looking at the concepts behind it, but it's looking at the language and the words and the elements of words, and learning how they relate to each other in context. So it's working off a vast closed data set, and what it does is identify patterns in how concepts, facts and ideas relate to each other. There's an article I refer to at the end in the slide pack, by Kerman, which is really, really helpful in explaining this in totally lay terms. The thing that's important to remember is that this is a mathematical model in operation that gives you predictive outputs. It has no access to the real world, it has no access to tacit knowledge, professional judgement, the real person sitting in front of you. It has no knowledge, understanding, consciousness or experience, no judgement. It's been called autocomplete on steroids. So it's kind of sophisticated pattern matching rather than intelligence, even though it sounds pretty intelligent.
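As a toy illustration of "autocomplete on steroids", here is a minimal sketch, assuming Python, of predicting the next word purely from counted patterns in text. Real models use neural networks over vastly larger corpora, so this shows only the shape of the mechanism: plausible continuation, not understanding.

```python
import random
from collections import Counter, defaultdict

# A tiny training corpus standing in for "text scraped off the internet".
corpus = ("the testator revokes all former wills and "
          "the testator appoints an executor and "
          "the executor administers the estate").split()

# Count which word follows which: a crude stand-in for the vast
# statistical model of language patterns described above.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    options = follows.get(prev)
    if not options:  # no observed continuation: fall back to any word
        return random.choice(corpus)
    # Sample in proportion to observed frequency: plausible, not reasoned.
    return random.choices(list(options), weights=list(options.values()))[0]

word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # fluent-sounding, pattern-matched text
```

Run it twice and you will usually get different output from the same start, which is also a small demonstration of the imprecision discussed later.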
(15:38):
Emily Bender, who has been a fairly well-known critic of AI, describes it as a text extruder, which is a bit of a funny way of putting it, but pretty accurate. Generative AI has a lot of strengths, though. If we think about it, if it's a text extruder, it's really, really great at generating text and manipulating it. So if you experiment with ChatGPT, get it to write a poem about a spring flower; give it something to do. Or you could maybe take a clause from a will, put it into ChatGPT and ask the AI to rewrite it in the style of Dr Seuss. All of those things are quite fun, and it's very good at doing that. The serious side of it is that it can change things like the education level, which may be very good for a client that doesn't have very good English.
(16:32):
It can really make things much simpler and help you to explain things much better without legal terminology. It's quite good at text identification and retrieval, so if you've got a huge bundle of documents, finding the right documents; it's kind of like an enhanced e-discovery. It's very good at ideation, so thinking about working through a marketing idea, or even thinking about whether I have really understood a concept in a particular case, or a legal concept and how it applies. So that kind of back-and-forth dialogue where you're really testing and clarifying your understanding of something can be really helpful. The stuff that it gives you might not be what educates you; just the exercise of having the conversation is the thing that's really helpful. Sometimes the information it gives can be really useful in and of itself. If it has the information within the corpus of data that it's learned from, sometimes it can give you really good, accurate information.
(17:30):
A lot of the time it searches online as well and finds you relevant documents. But with that function, it's not actually incorporating what it finds into its knowledge per se. It doesn't learn from it the same way that you and I do, but basically takes it as part of a prompt. So the information is sometimes good, and it produces very plausible, well-written text. Generally speaking, it sounds really good, but there are a lot of weaknesses to it as well. Imprecision, of course, is one of them. Most software really aims to be as precise and predictable as possible. But with an AI, if you give it the same prompt twice, it will always give you a slightly different answer, and nobody knows quite how it comes up with what it produces.
(18:20):
So it's this vast statistical exercise that's going on there, and this sometimes results in hallucinations. What that means is that you might ask for a case about a particular topic, and it's not necessarily going to be looking for a case about that topic, but it will generate something that sounds like what you'd like. So if you ask ChatGPT to find some case law for you on a particular topic, it's generally going to give you some cases that sound really relevant but are completely fake. It's really looking for the most likely and the most plausible answer rather than the accurate answer, and that's the thing that we really need to be aware of. It's full of biases; that's really about the nature of the information that's in the training set. If you want to see a great example of this, ask ChatGPT to generate an image of a clock with its hands set at 11:25.
(19:17):
What it's going to do in just about every case is give you an image of a clock with the hands set at ten past ten. And that's because the vast number of images online of clocks and watches have their hands set at ten past ten. So that's an example of a bias. The plausible, well-written text is often a weakness as well, because you don't know what you don't know; you can be lulled into a false sense of security. So it's always important to check very critically and carefully what it produces. Also, in the putting together of AI, there's a fair bit of misuse of intellectual property. Basically, the developers just pull in everything that they possibly can in order to build a bigger and bigger training set, because the theory is that scaling improves the tool. That's not necessarily the case, but that's the theory.
(20:09):
And what that also means is that a lot of the AI developers for tools like ChatGPT are going to be looking for your confidential information and will want to recycle your prompts. So that's one of the reasons why you have to be extremely careful about what you put into those open tools. Now, legal tools try to compensate for some of these faults; they try to make the AI more like old AI, with narrowed applications. So there are things like retrieval-augmented generation, which tries to restrict the AI to only pulling information from a particular data set. You'll see this in some of the research tools that will give you verified links; that's one of the ways that they try to narrow it. There's a great deal of additional manual training, additional documents in the training set, and very careful instructions on the backend.
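Here is a minimal sketch of that retrieval-augmented generation idea, assuming Python. The document set, the keyword-overlap retrieval and the complete() stub are all illustrative stand-ins (real legal tools use vector search and a production model); the point is only to make the "restrict the AI to a particular data set" mechanism concrete.

```python
# Minimal retrieval-augmented generation (RAG) sketch: the model is
# only allowed to answer from a closed, verified document set.
documents = {
    "Wills Act s 7": "A will must be in writing and signed by the testator",
    "Smith v Jones [2020]": "The court held the testator lacked capacity",
}

def retrieve(question, k=2):
    # Naive keyword-overlap scoring; real tools use vector embeddings,
    # but the narrowing principle is the same.
    q_words = set(question.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def complete(prompt):
    # Stand-in for a call to whatever language model the tool uses.
    return "[model answer, grounded in and citing the supplied sources]"

def answer(question):
    sources = retrieve(question)
    context = "\n".join(f"[{name}] {text}" for name, text in sources)
    # "Careful instructions on the backend": answer only from sources.
    prompt = ("Answer only from these sources and cite them by name.\n"
              f"{context}\n\nQuestion: {question}")
    return complete(prompt)

print(answer("what are the requirements for signing a will"))
```

Because every answer is built from named retrieved documents, the tool can show verified links; the verification step, checking the cited case actually says what is claimed, still belongs to the lawyer.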
(21:05):
And also, security features are important. The fact that AI is trained on a closed set means that you can restrict it from incorporating new information into future training sets. But it means that any developer that is using an AI as part of its tool has to have very careful settings to make sure that the information doesn't go back into a future training set. Now, the use cases are many and various. People use it for things like research; drafting letters, opinions, contracts, statements of claim and so on; changing the tone and education level of letters and documents; and war-gaming a situation, so trying to get it to act as a red team so that you can anticipate some of the arguments of the other side. One of the really good ones is interview transcripts and file notes. So VXT is good for that sort of thing.
(22:04):
With mine, I use the AI Legal Assistant, and I usually record conversations on a sound recorder and then upload them into the tool to get a transcript. I then get it to generate file notes and action points, and that's really quite useful. It's good for discovery and due diligence, again as long as it's specialised and narrowed, and chronologies as well. There are a couple of good tools in those areas. There's one called Diligen, which is very well regarded and winning a lot of awards for due diligence, and there's one called Mary Technology, which is really great for chronologies. Some of the more general-purpose AI tools are quite good for those as well. Document uploads and sorting, so categorising them and putting them in folders; ideation again, so thinking about how you're going to strategise a case or something of that nature; teasing out ideas and learning; comparing multiple client statements for consistencies and inconsistencies; putting things in table form. All of these kinds of things are really good and helpful use cases, as long as you do it right.
(23:10):
Which leads us now to turn to the question of ethics, use cases and the policy settings you need to have in place. The profession and its institutions are increasingly concerned about some of the things that have gone wrong already with generative AI, but most of the situations can be dealt with, and our duties made clear, by applying the longstanding existing rules of ethics. The courts have particular guidance about what they require. So the Supreme Court of Victoria has a good guidance note, which has been out since 2023, and that really requires things like disclosure, and also complying with any requests from the court for information about how AI has been used. The Supreme Court of New South Wales note is far more restrictive; I personally think it's actually a bit of a backward step. It has particular restrictions in it, like not using AI to generate affidavit material.
(24:08):
The uniform law regulators have put out some guidance, and you can find that on the websites of the Victorian Legal Services Board, the Legal Practice Board of WA and the New South Wales Law Society. And all of the law societies have a fair bit of guidance and some articles and policies and other documents that you can find on their websites. The Law Society of England and Wales has some very useful guidance. And similarly, the American Bar Association's Formal Opinion 512 is well worth reading. They all pick up basically the same kinds of things. The only one that's a bit different is the ABA guidance, because that talks about pricing legal services in an effective way. So that's a whole other topic, which I could talk about for hours, but I'll just touch on it towards the end of the presentation. Alright, so let's think about ethical principle one.
(25:02):
And that is the principle that as lawyers, we must safeguard the system. So this is about the primacy of the administration of justice: not misleading the court, being candid towards the court and being candid generally, and not diminishing the reputation of the profession. So this is obviously a very important and fundamental principle for us as lawyers. So how can you breach the rules with AI? For a great example, there's the now notorious case of Dayal, a 2025 matter in the Federal Circuit and Family Court of Australia. The solicitor in this case produced a whole bunch of hallucinated cases to the court, not realising that they were in fact hallucinated. And unfortunately, these cases actually came from the research tool that was part of the AI in his practice management system, because he failed to press the verify button. So he just thought, these sound like good authorities, put them up, and discovered much to his horror that they weren't real.
(26:05):
He's been dealt with pretty severely by the Legal Services Board. He's no longer able to practise on his own account; he's gone back to having a supervised practising certificate for the next two years, he's had to shut down his firm, and he's had to accept these very onerous supervision conditions. So that's the first example of a solicitor being dealt with in a disciplinary way over this, which was pretty bad for him. It's very much a cautionary tale. So how do you breach it? In your research, you use hallucinated cases and you don't verify them. You might put in a false or misleading affidavit or other evidence, because it's been generated by ChatGPT or been generated in some way, and you haven't carefully checked it against the primary documents or against a discussion with your client. You might get unverified AI-generated statements or references from a client, if they're putting up personal references as part of the deal.
(27:02):
If they're going for a plea or something of that nature, you need to make sure that the person who's given the reference actually thought about it and wrote it themselves. And sometimes the rule can be breached by just producing really poor documents; there might be missing material or misinterpreted information. It all goes back to the primary sin of not checking, checking, checking and verifying. So how do you keep the rules with AI? The first one will be the tool that you use for research: always check that very, very carefully. Make sure that there's retrieval-augmented generation, that there are verified links for every case that it mentions, and not only that there are verified links, but that you actually check the case and make sure it is actually relevant. So all of those things are important. Some examples of tools that are generally good: Habeas.
(27:57):
So this is a tool that I use. It's from a small Australian startup, but it works off state and federal case law and legislation databases, and that works quite well. I think one of the things I most like about it is that if it can't find something relevant, it tells you. It'll give you stuff that's generally relevant, but say, 'I can't find anything on point.' And that I think is great, because it means it's telling you rather than making stuff up. People who use Lexis+ AI really like it. I haven't heard that much about CoCounsel, but with all of these things, they're very useful for bringing up the cases. But again, you always have to verify them; just because there's a link doesn't mean that it's right for the purpose. With references and statements, you always interview the client first, so you know what you should actually be looking for before any kind of statement is prepared.
(28:48):
And we need to use the right tools for volume work. So, as mentioned before: Mary Technology for chronologies, Diligen for due diligence. And when you do the war-gaming with an AI, make sure that you're not putting any sort of client confidential information into it, and make sure that the ideas that it's giving you are actually helpful. So don't just take it as being an expert on the other side; it's just a text generator. Sometimes it can give you stuff which is actually really useful; sometimes it'll give you stuff that is ridiculous. But every now and then there's something really helpful in there, so it's not a bad exercise to do. Alright, so ethical principle two is protect your client. This is seen in rules like: act in the client's best interests, be honest, be competent, diligent and prompt, avoid compromise to your integrity and independence, and give clear and timely advice.
(29:44):
And of course also this really important one: keep your client's information confidential. So all of those things are really important. Okay, so how can you stuff this one up? The first one is to give AI-generated advice or documents without checking them carefully. Basically, you might get some information that sounds very nice and just hand it out without checking it really, really carefully. And I don't just mean a cursory glance; I mean thinking about it critically, slowing down and really having a look at it. Now, you might think that's a bit of a waste of time, but it's actually useful, not just because you're getting a better output if you check it carefully, but because it really helps you clarify your thinking. Another way to stuff it up is to use unverified case law, in the same way as mentioned before. Another one is: don't tell the client you're using generative AI or how you're using it. Because if it turns out to be wrong, and it turns out you are using it, there is nothing more designed to bring the reputation of the profession into disrepute.
(30:53):
And I think it's also really important to disclose to the client what you're using and how you're using it, what sort of tools they are: that they are tools that help you make your work more efficient and effective, but that you never outsource your work to them, and that they're tools that you have verified are secure and appropriate. And this leads us to the big one: don't use ChatGPT or another unsecured tool with client information. This is an absolute no-no. Even with the versions of ChatGPT or any of those kinds of tools that give you an option of not allowing it to learn from your data, the terms and conditions are generally the sort of thing that you can drive a truck through. So you really have to be very careful, and if in doubt, just don't use them. Another way to stuff it up is to not talk to the client about their own use of AI.
(31:44):
You'd be surprised how many clients would feed your information and documents into an AI and ask the AI to explain it to them. This is crazy. It breaches their own confidentiality, and there's quite a possibility that they may be stuffing up their claim of privilege over the documents as well. So always talk to the client about not putting information into ChatGPT. If the client's really concerned about how to understand it, why don't you use your own AI to give the client an executive summary, or basically to summarise the document and put it into plain language, so it can be part of your advice to the client. Another way to stuff it up: overcharge the client for AI-generated documents. The documents that you create have to be good value, and the AI-generated documents are often nowhere near as good as the templates that you know like the back of your hand, that you've worked over and over and over, and that you're confident are right.
(32:44):
So just because you can generate a document with AI doesn't mean that you should. In some cases, think about the intellectual property that you've developed; think about what you're using that is reliable and that you really understand, and prefer that to an AI-generated document. And another one: dabbling in unfamiliar areas of law. Sometimes the AI can make you feel overconfident. Just because the AI makes it sound good doesn't mean that you can dabble in an unfamiliar area of law. That's always the advice of the insurers, and it's really great advice and it doesn't change; in fact, it probably gets more pointed with the use of AI. Alright, so how can you keep the rules with AI? Use legal-specific AI tools and check the output. So basically, no outsourcing of your work to the AI; you always have to check what it gives you.
(33:35):
Experiment with how to speed up administrative tasks. One of the things that I often say to AI developers is that we don't want another lawyer; we want an AI-powered assistant. So yeah, it would actually be a lot more useful if it could do administrative tasks for us. Again, use the right research tools, verify the output, and check the input. Disclose what you use and how you use it to the client; that's really important. And make sure that you've got the right AI policy settings in the firm. So with your staff, you specify what tools they can use and how. I'll talk in a second about how you choose the right tools. And you need to make sure that staff are trained in how to use it and have a basic understanding of the dos and don'ts, and also that there's some enforcement of the policy.
(34:27):
So policies are useless if you just stick 'em in a drawer and forget about them. There has to be some consequence for doing things incorrectly, and there has to be some incentive to let the practice know if something's gone wrong with AI. So tools have to have the right security settings. The security you need: whether it's cloud-based or on a local server, it has to be encrypted end to end. That means the process of getting your document from your law practice, having it processed, and getting it back into the law practice has to be encrypted all the way through. There needs to be security certification; the minimum that you're looking for in Australia is the ISO 27001 standard. It needs to be suitable for use with client confidential information, so again, check the terms and conditions really, really carefully and talk to your vendors about this. You need to make sure there's no data leakage, so that your data can never be used in future training sets or to improve the model or service.
(35:25):
So every time you see the term 'improve', run a mile. Data ownership: you need to make sure that it's you that owns the data, not the AI; it's always yours, and it always comes back to the law practice. So, some examples of tools that have the right kinds of security settings, with the caveat again that you have to check for yourself: the AI Legal Assistant; Archie, which is the AI from Smokeball; Leap AI; Lexis+ AI; CoCounsel; and Clio is also introducing an AI very shortly. So those sorts of things are the ones that have the right security settings. Ethical principle three: supervise your practice. The buck stops with the principal; the principal is responsible for ensuring the practice is compliant. There are sections to that effect in the uniform law and in all the legal profession regulations across Australia, and also in the conduct rules. The responsible solicitor must supervise all legal and non-legal staff working on a matter.
(36:28):
So basically your practice has to be set up in a way that's effectively handling and safely using AI. Okay, so how do you breach this rule? The first one is the problem of shadow IT. This is a situation where somebody's worked out that a technology tool will help them do their work faster and more easily, and they use it without telling you. And this is unfortunately a very, very big problem; there are a lot of people that use ChatGPT without telling the boss. So first of all, that is a real problem because confidentiality can be breached. And also, you might think, oh wow, this person's a really good writer, when in fact they're using ChatGPT, and you might not check the work as carefully as you might otherwise. Another thing that you can do wrong is have no policy, no training, no governance or enforcement.
(37:23):
You really need to think carefully about how your law practice uses AI, what's in and what's out, and what your staff need to know about the rules around it. No specification of which tools staff can and can't use: if you don't tell them what they can and can't use, the default is likely to be something that's free and easy. ChatGPT all over again. No guidance on how to use existing tools: say, for example, some of the research tools; you can ask them general questions, but they're not actually secured in such a way that you can put client information into them. So again, knowing the terms and conditions, and knowing how you can and can't use a tool, is really important. You can fall asleep at the wheel: you can think that your staff are generally ethical and know what they're doing, but you don't talk to them about it.
(38:14):
You don't monitor, you don't enforce. And another thing is having inflated expectations of AI, thinking that it can be a substitute for a lawyer, when this is never, ever the case. A lot of people don't know this, so there may be staff in your practice that think it's as good as a lawyer, but it's not. And finally, failing to supervise junior staff effectively. They don't know what they don't know, and if they're using AI, they can often really miss out on the learning experiences. So it's really up to you to monitor how they use it and to make sure that they do still learn and understand the underlying legal principles that they need to be following. So how do you keep the rules? You have a really good AI policy that backs up the principles. You have general rules of thumb and guidance on how to use AI well.
(39:05):
You choose the right tools: check the terms and conditions very carefully to make sure that they are appropriate for the use case, and make sure a tool has the right security settings. And you make sure you cover off on your human risk management, which means that people are trained, that they talk to each other about it, that the work is supervised, that there's governance in place about which tools to use and for what kinds of use cases, and also that there's enforcement: there are consequences when things go wrong. Basically, the rule here is pick the right tools for the right jobs and use them in the right way. What this is generally going to mean is some experimentation in the practice. There can be different use cases that people can experiment with if they have the right tools; if they know that it's a secure tool, there is room to experiment with client confidential information.
(39:59):
And it's often a good thing for people at the senior associate to partner level to have a bit of a play with it, see what works well and what doesn't, and share it around the firm. All of those sorts of things are part of your evolving relationship with this technology. Now, I've been around long enough to remember when email was a bit of a controversial topic in law practices. It used to be that you would just send each other letters, and then everybody got a computer on their desk around 1993, '94, and that's when there was a raging controversy about how to use email and faxes. We used to send the fax and then send the original in the post, and people used to do that with email as well. All of those kinds of wrinkles have been worked out now, and we know how to use these tools more effectively.
(40:51):
And I think that will be very much the case with AI as well, but we really need to understand what we are working with and work on it together. Okay, just finally and very, very quickly, ethical principle four. I'm barely going to scratch the surface on this, although I will say it's one of my favourite topics and I've spoken about it quite a lot: the whole idea of transparent pricing. Now, I think this only becomes a live issue when the AI starts to really change the way that you do your work. And again, as I mentioned before, the important principle there is to articulate what it is you're selling and to sell it at a fair and reasonable price. Technology can speed things up, technology can make things work better for you, which might mean that the hourly rate is not going to be a suitable model for you anymore, because the way that you use time, and the way that you have time as an input, is going to change.
(41:47):
But make sure that what you're giving the client is not just the same stuff sped up, but something with a deeper quality, something that represents real value for them. So I think really, when it starts getting to the point of business transformation, that's when we need to start thinking about how to change what we do and how to frame what we do. The cost of the services has to be fair and reasonable, so all the criteria of section 172 of the uniform law must be met. That means the costs need to be fair and reasonable, proportionate, and reasonable in terms of the amount of work that's actually warranted by the legal problem. So those are the sorts of things we really need to bear in mind. Breaching the rule with AI generally means just generating stuff and trying to sell it at the same price.
(42:38):
Again, it's the quality control issue, and the way that you use it effectively, that is most important in understanding what represents real value for the client. Being ethical with AI in this area means understanding what you're selling and thinking about how to talk about value with the client in a much more sophisticated way. I think this is a challenge for us because we're used to doing things by hourly rates, but the fixed-value, fixed-price conversations really help us to focus back on what the value of what we're doing for the client actually is. And that's probably the most important and one of the most helpful problems that AI has given us. Broader professional implications: I think it has great value and great possibility for access to justice. But having said that, the caveat is that the AI has to be really, really carefully trained.
(43:32):
So I have a friend, David Burton; he's the founder of a firm called Law Lux, which is heavily leveraged on AI. But my goodness, he does a lot of training with it: the AI is very carefully worked with, very labour intensive and very carefully scrutinised, to make sure that what it's producing is of sufficient quality to give to clients. So it is a possibility, but again, it's a matter of enormous investment in time and effort at this point. But bear in mind that about 80% of the people who experience a legal problem and need a lawyer find it very difficult to get appropriate and economical help. So there's a huge possibility there. And again, don't lose sight of the why. Positive ethics and professionalism are more important than ever, and the person-to-person relationship, being the expert in a person's corner who cares for them when they're going through one of the most difficult parts of their lives, is incredibly important and valuable. So remember that your expertise, your familiarity with the legal system, your phronesis, your practical wisdom: those are the things that clients really love and value. And with all of that in mind, this is how we use AI to practise better and to be more effective lawyers.