Joanne Sprague is a recognized social impact leader with expertise in scaling programs, partnerships, and campaigns globally....
Angela Tripp is a Program Officer for Technology with the Legal Services Corporation. She is responsible for...
Jennifer is a Principal Program Analyst in the State Bar of California Office of Access & Inclusion....
As Professor of the Practice and Co-Director of the Program on Law & Innovation and the Vanderbilt...
| Published: | November 11, 2025 |
| Podcast: | Talk Justice, An LSC Podcast |
| Category: | Access to Justice, Legal Technology |
On this episode of Talk Justice, legal technologists discuss a recent survey that found that legal aid attorneys are adopting AI at a faster rate than other legal professionals. In May of 2025, Everlaw partnered with the National Legal Aid & Defender Association, Paladin and LawSites to conduct a survey of 112 legal aid professionals. The survey formed the basis of the report, “The AI Advantage: How Technology Can Help Bridge the Justice Gap,” which was published in September. It found that 74% of the legal aid organizations surveyed are already using AI in their work, double the generative AI adoption rate of the wider legal profession.
Angela Tripp:
But if we can say, no, look, it’s got the same error rate as a human does in whatever the task is, I think that helps bring along the folks who aren’t sure whether it’s good for clients.
Announcer:
Equal access to justice is a core American value. In each episode of Talk Justice, An LSC Podcast, we’ll explore ways to expand access to justice and illustrate why it is important to the legal community, business, government, and the general public. Talk Justice is sponsored by the Leaders Council of the Legal Services Corporation.
Cat Moon:
Hello and welcome to Talk Justice. I’m Cat Moon, your host for this episode. As co-director of a law school AI lab, I’m constantly asking: can AI help close the justice gap? Well, a new report suggests that we are already seeing the answer. Apparently legal aid professionals are embracing AI at twice the rate of other lawyers, and other results are promising as well. Here to unpack what this means and what we can do about it are Joanne Sprague, Senior Director and Head of Everlaw for Good, Jennifer Zelnick, Principal Program Analyst for the State Bar of California Office of Access and Inclusion, and Angela Tripp, Program Officer for Technology at the Legal Services Corporation. Our conversation today centers on a new report that came out earlier in 2025, titled “The AI Advantage: How Technology Can Help Bridge the Justice Gap.” Everlaw and a few other folks are behind this report, and we’re going to dig into it, extend its application, and think creatively and critically about what we can do with this data. To start us off, it seems to make sense to talk about what the impetus of this particular research was, which is very interesting, maybe a little bit surprising.
Joanne, could you kick us off by sharing with us the impetus, how this came to be?
Joanne Sprague:
Yeah, for sure. So each year Everlaw puts out an e-discovery innovation report in partnership with a couple of leading legal tech trade associations, ACEDS and ILTA. It explores tech adoption and viewpoints among the legal profession at large. As part of our Everlaw for Good program, we are focused on helping to level the playing field for justice. We work with a lot of legal services organizations that are focused on the public interest side of the law, and so we thought it was high time that we did a similar survey of legal services professionals, to understand how they were thinking about tech, and AI adoption in particular, at this time, and to help determine what the legal industry might be able to do to ensure that these latest innovations are accessible to the legal aid industry. So we had that idea, started talking to a few folks, and our partners at NLADA, Paladin and LawSites heartily agreed. So we were really excited to launch the survey at the Equal Justice Conference in May of 2025 and then publish the results in September of this year. And I was really excited, and frankly my mind was a little bit blown, to see some of the results from that report.
Cat Moon:
Yeah, it is pretty fresh off the press and was really interesting to dig into. So a big reason why I’m excited we’re having this conversation, but the kind of surprising pieces I’d love to hear from all of you. What about this data surprised you the most? And we’ll start with Jennifer, Jennifer passing the virtual mic to you. What about this surprised you?
Jennifer Zelnick:
Sure. Maybe I’ll pivot the question a little bit. I’ll say that I actually was really excited, because a lot of the data mirrored a survey that we conducted with State Bar IOLTA grant recipients back in October of 2024. So I was really glad to see that the findings of that survey rang true a year later and demonstrate that the findings are representative not only of California legal aid, but of legal aid more broadly.
Cat Moon:
That affirmation I think is really, really telling and important. Thank you. So now I’m going to pass the mic to Angela. So the question, let me reframe it: what surprised you, or didn’t surprise you, or pleased you about the results of this research?
Angela Tripp:
That’s an excellent version of the question, Cat. The initial questions didn’t surprise me, because at LSC we launched our AI Peer Learning Labs in the early part of this year, and when people registered, we asked them a few questions about their personal use, their organizational openness, and their organizational adoption. And we saw that personal use was fairly high. Organizational openness was pleasantly high; I was very happy to see that most people said their organizations were very open to the use of AI. What we saw in our survey, because we specifically asked about organizational adoption of AI, was much less adoption. It was a downward line indicating fairly little organizational adoption. And I think that makes sense, because I think the Everlaw survey was more about people’s individual use of AI in their workflows, whereas we were looking at what your organization is doing: what sort of large-scale projects has your organization invested in and adopted? And I think that the data makes sense together, because an organization’s approach to AI is likely going to be led by the individuals who themselves are already using it and have thoughts on how it can improve the overall operations.
Cat Moon:
So there are a lot of threads I’d like to pull there, but before I do, I do want to shift to Joanne, and this question keeps evolving as you all are talking. So maybe, Joanne, I’ll phrase it as: what was most interesting to you about the results of this data?
Joanne Sprague:
Yeah, what I thought was really exciting and compelling from Everlaw’s perspective was comparing these results to our broader legal professional e-discovery innovation report. And it’s not a complete apples-to-apples comparison; I don’t want to overstate this as a scientifically controlled, representative study. But seeing that the rate of adoption among legal aid professionals was approximately two times the rate of the broader legal industry, I thought, was very exciting and surprising. Having worked in the broader social impact industry, not legal aid specifically, but across social impact issues and nonprofit areas for the past couple of decades, there’s this sort of stereotype or bias that public interest and nonprofit organizations are laggards when it comes to tech adoption. They don’t have the resources or the knowledge, or whatever stereotype you want to put to them; they’re the last people to adopt a new technology.
And in this instance, and particularly with legal aid professionals, we’re seeing that really being flipped on its head. Now, once we have those results, I think it’s often easy to say, well, of course, necessity breeds innovation. There are fewer resources, there’s just a need, right, given the yawning justice gap. But it’s worth taking a moment to reflect upon, and I think have a lot of respect for, the gusto with which legal services professionals have really taken to adopting these new technologies, in particular some of the generative AI capabilities that can really make a difference. It’s not a foregone conclusion that this would happen, right? There’s a lot of opportunity, but there are also very valid concerns and negative, skeptical narratives about how generative AI is going to affect all professions and all social issues, including the legal industry. And so I would not have been surprised if the data had been the opposite, with much lower adoption compared to the broader legal industry. So to me, it was very encouraging to see that despite those headwinds, so many legal services professionals were really diving in at this very early stage of the AI revolution, leveraging these technologies and finding ways to use them for the most pressing, urgent use cases.
Angela Tripp:
I just want to add to that, Joanne, that I think some of that hesitancy, the extra hesitancy from the legal aid community, probably comes from the fact that predictive AI has been used for a decade or so in ways that are incredibly harmful to legal aid clients, including in the administration of public benefits such as food stamps and unemployment assistance, and also in criminal cases. And so I think that is also why the organizational adoption, this sort of official, across-the-organization adoption, is not as great as the individual adoption: that unfortunate legacy has made legal aid, particularly the leaders of legal aid, more hesitant. But I think we’re definitely seeing much change in attitudes over the last year.

Joanne Sprague:
That’s a great point.
Cat Moon:
It’s an excellent point, and it’s very nuanced. I mean, there are a whole lot of factors that feed into individual and organizational choices with respect to this technology, and we’re going to jump into some of the identified concerns and potential barriers in this conversation, absolutely. I wanted to share just a point that stuck out to me, which feels very optimistic. One of the questions centered on: looking ahead, if you were using AI to its full potential to speed time-consuming tasks, how many more clients, percentage-wise, do you estimate your organization could serve? And overwhelmingly, folks responded that it could increase their ability to serve, right? A small percentage, 7%, said more than 75%, and a bunch of folks, 46%, said from 1 to 25%. And so seeing the potential, and being kind of driven and guided by that, I think is really exciting to watch. Because from my view of now nearly 30 years in practice, and close to 20 years really paying close attention to how lawyers are, or mostly are not, adopting technology, this is consistent with that historical uptake, although greater; it’s really a shift in attitude in the legal profession that is exciting to see.
If for no other reason, now we know we can change our minds about things, right.
That growth mindset’s amazing. It is possible. So, oh gosh. Well, okay, a whole bunch more to dive into. So let me ask this. The report shows, as Joanne pointed out, that legal aid is adopting AI at about twice the rate of the broader legal profession, based on the data that Everlaw has collected. And one way to look at this is that the under-resourcing of legal aid is driving the innovation, that people are more willing to try new things because there’s this really acute need. What do we think of that potential paradox? Do you think that’s in play here? And if so, how? Jennifer?
Jennifer Zelnick:
I think organizations are realizing that the justice gap has the potential to grow exponentially with AI. And what I mean by that is legal aid organizations realize that big law is using this technology, and if legal aid doesn’t, the chasm between the services that legal aid can provide versus what private law firms can provide will continue to grow, and the resources available to legal aid clients will continue to be diminished compared to those who can pay for private legal services. And with that in mind, we’ve seen grantees really embrace AI in ways that feel comfortable to them. I’m thinking in particular of a small legal aid organization, the Legal Aid Society of San Bernardino, where we have an executive director who is really passionate about AI and has found ways to use it that he identifies as lower risk: for example, training new staff and creating a lot of internal resources to support attorneys and other legal professionals. And in that way, I think programs are really eager to meet the moment rather than shy away from the technology.
Cat Moon:
Which is an exciting growth mindset, right? Let’s embrace the new and the opportunity. So, Angela, what are your thoughts on this potential paradox?
Angela Tripp:
Well, I am very glad that this whole podcast is really about the innovation capacity of legal aid programs, because I agree. At the Legal Services Corporation’s Innovations in Technology Conference last year, we celebrated our 25th year. So we’ve had 25 years of legal aid technologists coming together, and it started as a very scrappy, ragtag group of, I don’t know, I think like 40 people. And now more than 800 people come to the conference most years. The innovation capacity of legal aid is often driven by scarcity, but also driven just by creativity, and, as you said, the willingness to try something a different way. The types of clients and the types of cases that legal aid deals with often require that kind of creativity. It’s not just because of scarcity, but also because a lot of the cases that legal aid organizations take are just kind of different from mainstream private practice. Not a lot of private lawyers do public benefits cases or foreclosure cases, because the clients don’t have money to pay for them. So the creativity and the ingenuity have been a part of legal aid for a long time.
Cat Moon:
Absolutely. We see that in spades at every ITC, which is really, really exciting. So in many ways it’s not at all surprising that folks are running with this. Joanne, you have a perspective with Everlaw for Good; your job is to empower folks in legal aid and other organizations to use this technology to supercharge their work. What are you seeing, boots on the ground?
Joanne Sprague:
Yeah, I mean, we get a new organization interested in our technology and using it through Everlaw for Good every day. So it’s really exciting, and that has only continued to accelerate over the past couple of years. And we’re very grateful to organizations like LSC for hosting conferences like ITC and some of these others that help us build awareness and get the word out about the offering that we have, which we think can help legal services organizations. So the ecosystem, its growth, and the continued support of these areas matter: I can talk till I’m blue in the face about how our technology can help organizations, but that doesn’t make much of a difference until someone like Angela or Jennifer supports the work and the potential for impact. The other thing that I’ll mention, because I think the points that have been made about the paradox are really powerful and really good ones:
I think that this result also dovetails really nicely with a bit of a surprising result from our question about obstacles: cost and technical resources were actually two of the lowest barriers to adoption of these AI and technology innovations. I spend a lot of time talking with and hearing from other folks about the concern that the cost of these technologies, which is large and increasing over time, is going to widen the justice gap and be a real problem. And to see that, at least at this juncture, at this point in time, that is not the highest concern or obstacle to adoption, I find really compelling. Now, maybe that’s just because there are higher-level, higher-priority concerns, which is not great. But my hope is that this is also an indication that these technologies, at least at this early stage, are more accessible than we might think. Everlaw for Good offers our product for free.
We offer as much AI as we possibly can for free, and that level of free offering continues to increase over time. We think that that’s important for our tool, but we also recognize that our tool is a very tiny little sliver of the solution in the technology ecosystem. And so one of the things that we hope to do is to be a role model or a first mover, to inspire other companies in the legal tech and broader technology spaces to offer their products for free or at highly discounted rates to the nonprofit sector and to legal services organizations. And I think right now, again, who knows what the future looks like, but right now we’re seeing some level of interest in being responsible companies, in helping to address the justice gap by making sure that these tools are available to organizations. And I just hope that that trend continues, that more organizations come to ITC, that more companies offer programs like Everlaw for Good has done, and that we’ll continue to see that support.
Cat Moon:
So I was surprised to see that cost was not closer to the top, and I think there are probably a few reasons for that. And I think you’re absolutely right that the legal tech ecosystem has this opportunity to really step up and make technology available to the folks who are situated to do by far the best good with it. So we’re not going to talk any more about cost, because there were some higher-level concerns, and I think these are worth addressing. Another ecosystem challenge, and I really think this is a challenge for this entire community, is wrestling with some very valid concerns and how they are going to be managed responsibly; it’s kind of daunting as an individual user, frankly, to face these things. The report identified data privacy, accuracy concerns, concerns with hallucinations, as well as ethical issues which are unique to legal professionals, to lawyers, as top barriers. So we’ve got this tension, because we want to innovate and we see this potential, but we have to deal with these very real and important issues. How do we innovate responsibly with this tension? What does that look like? What can our ecosystem be doing to support this? And feel free, whoever wants to jump in on this one; it’s a big one.
Jennifer Zelnick:
So in May, the State Bar launched the Legal Aid Justice Technology Collaborative, which is a statewide effort to support its 114 and growing grantees in understanding and safely adopting emerging tech in order to expand access to justice at scale. This is a work plan that currently carries us through May 2027, and it includes 15 deliverables as well as three guiding, or foundational, principles. This work plan was established after extensive research within the State Bar, including with the help of a consultant, and in consultation with our legal aid grantees. And I think the three foundational principles speak to some of the issues you addressed, Cat. Very briefly, those are: foregrounding the ethical use of technology as a tool to advance access to justice; community ownership; and supporting grantees across tech maturity levels and financial resources. And especially with that first one, I think it was really important to the working group, which consisted of grantees, academics, commissioners, and staff, to ensure that the work plan is really guided by all of the ethical considerations that foreground not only the work of lawyers but legal aid in particular, and to make sure that this work really serves low-income and underserved Californians.
And I think that these things are really important to attorneys, but they don’t need to be a reason to stop thinking about innovation. Grantees have expressed to me a lot of excitement about partnering with organizations who share their values. That can mean programs like Everlaw for Good, that offer free use of their amazing technology. It can also mean finding organizations that share ethical commitments, or commitments to addressing the environmental concerns that come along with AI data centers.
Cat Moon:
So it is just fantastic to hear that the optimism and excitement continue, and that folks are finding a way not to be bogged down by these potential negatives, really some of the cons. So moving to the national stage, looking at it across jurisdictions: Angela, do you have any thoughts on how we wrangle with these concerns?
Angela Tripp:
Thanks, Cat. I would love to talk about the accuracy concern, because that is something I’ve really been spending a lot of time thinking about lately: the importance of evaluation and testing and iteration. Every project needs to be evaluated. Every technology project needs to be tested and evaluated, but when you add in that third layer of AI, it’s a third universe of testing and evaluation that needs to happen. The legal aid technology community has gotten very good at doing usability testing, making sure that the tools they create, many of which are meant to be used by average people with no legal background, are easy to use and that people can figure out how to get the information they need. And legal aid is also very good at testing to make sure that a tool works the way it was intended in a document assembly tool.
Does the name go in the right place? Do the numbers go on the right lines? But testing for accuracy with AI is a new universe, and it’s interesting to watch these strategies develop. Even with closed universes, the way that AI works means it never gives the same answer twice. And so you need to do a lot of testing to make sure, specifically if the answers are generative, like in a public-facing chatbot, that the accuracy is at a place that is acceptable. And even if it’s not public facing, other tools obviously need to be tested thoroughly for accuracy. And that testing doesn’t stop when you go live with the product. AI models change, your source information often changes, and so some level of routinized, strategic testing needs to keep happening. That is how you can overcome the accuracy challenge.
And I think a bonus is that when you do that testing and you can show how accurate your tool is and compare it to human levels of accuracy, that is a way you can really bring along some of your skeptics. A lot of legal aid people say to me, well, this feels like second-tier justice, and that’s not okay for our clients. But if we can say, no, look, it’s got the same error rate as a human does in whatever the task is, I think that helps bring along the folks who aren’t sure whether it’s good for clients.
Cat Moon:
That is such an excellent point, comparing the AI output to the human output. And when they both reach the same level, it’s kind of hard to argue that it’s not as good. So thank you. And I think what strikes me, Joanne, with Everlaw for Good is that your company is making available a fit-for-purpose tool. There are a lot of different ways this technology can be engaged with: folks can engage through a consumer tool like ChatGPT, even at a high paid level; we could be using the technology to actually build a custom tool, and some legal aid organizations are doing that; and legal aid organizations can use tools like Everlaw. And it strikes me that there might be a range of concerns based on where you’re tapping into that spectrum of tools. So Joanne, you’ve got a tool made for lawyers, so maybe there’s a little more comfort and confidence there.
Joanne Sprague:
That was actually going to be one of my main takeaways from some of these obstacles. As an individual, I am of the not-another-tool variety; the effort of having to research and check the policies and do all that vetting is real, and that’s time taken away from other really important work to be done. Which is one of the reasons I’m so grateful for communities like the one that Jennifer has put together through the State Bar, or like LSNTAP, organized through the legal services community, for being able to spread some of that vetting around once an organization has done a deep dive to say, what’s the best off-the-shelf tool for intake, right? What’s the best tool for voice-to-text? We don’t all have to individually repeat the same level of research. But I do think, under that guise, there is a lot of validity to leveraging purpose-built tools in this environment where the stakes are so high. Getting it wrong when it comes to accuracy has real human consequences, not just for the lawyers who are embarrassed or sanctioned, but for the individual clients.
And it’s so easy to forget that we’re still in, what, year two, year three of this generative AI journey; everything is really early. So I think there are some universal things to look into when it comes to purpose-built apps. Data privacy and confidentiality were the biggest concerns when it came to obstacles to adoption. So things like zero data retention, things like not training on your data or the prompts that you put into a tool, those are really critical, but they are also things that exist now, that are sort of industry standard at this point, if you’re using a purpose-built tool for legal. With something that is consumer grade and offered for free or at heavy discounts, sometimes the trade-off is that your data is for sale. And so leveraging some of those tools that have really been built with the constraints and the data and security requirements of the legal industry in mind, I think, is a good stopgap against some of that.
The other good news, and I don’t want to in any way downplay the risks that Angela has really eloquently brought up, is that accuracy is going to get better over time, right? We are still so early; there is going to be some natural evolution in this space, and companies have a market incentive to make their outputs more accurate and less likely to hallucinate over time. One of the things that I love about some of our AI capabilities is that they are trained and built so that if you ask a natural language question and the tool doesn’t know the answer, it says, sorry, we don’t have the information for you, and doesn’t go out to the open web. And so I think there are some safeguards that get built into tools, depending on the importance of having that accurate information, and those will just get better over time. Which is also, to some extent, a plug for leveraging existing market tools rather than trying to build your own, when you would then have to invest continuous time in maintenance and innovation over time.
That’s not going to be the case for everyone; there are great cases for building your own internal tools, but that’s just another consideration to put into play. And then my last plug is for all of us in this industry and outside it to keep up the market pressure on the companies that are building these models, and the companies that are building tools on top of them, to take their ethical responsibilities seriously, because that was the number three concern. There will eventually be consolidation in this industry, and we want to encourage and do what we can to ensure that the most ethical and responsibly built companies are the ones that win in the AI race.
Cat Moon:
So many takeaways there, but I will highlight that we are in what my colleague Mark Williams calls the AOL phase of gen AI: we’re still dialing up, and so there is so much yet to come. But we’ve been through an evolution of this sort before, and that should help guide us. So we are about out of time, even though there’s a whole lot I’d like to dig into based on what you all just shared, but we’re going to move on to a lightning round, which I always try to do to invite folks to share very practical action items. What are some practical takeaways? Whether somebody who’s listening is just starting to explore generative AI or they’re already using it regularly, can each of you share at least one concrete thing that they should do in the next month to move their personal AI strategy forward? And we will start with Angela.
Angela Tripp:
Go. It’s very self-serving, but I would recommend joining the Legal Services Corporation’s AI Peer Learning Labs, because we’re bringing together organizations who have the same challenges, problems, and considerations that legal aid organizations have, talking about how they’ve overcome them. And so it’s a great experience. We cover many different topics, many different ways to use AI. It’s a great opportunity to share your expertise if you’re feeling like an expert, or to learn more from other organizations. It’s just a great way to increase your knowledge of and confidence with AI innovation in the legal aid space.
Cat Moon:
Excellent.
Angela Tripp:
Oh, can I share one more?

Cat Moon:
Oh, yes. Go.
Angela Tripp:
My other favorite takeaway, which I’ve heard many, many times, is when you’re thinking about how to use AI, start with the problem, not the tool, not the solution. Focus on a specific problem that you’re trying to resolve and ask yourself: could some of this be automated? Which part, and how? And that’s been, I think, the best piece of advice that I’ve heard.
Cat Moon:
And that will be relevant whether you’re a newbie or an expert; that will always be a useful approach. Excellent. Thank you. Alright, Joanne, what you got?
Joanne Sprague:
Yeah, very similar to what Angela just said about identifying the problems that can be solved. What I would do, like today, is a rapid survey of your team. This could be a one-question Google Form, or literally pulling everybody together over lunch and just going around and asking: what is the number one most annoying, frustrating, time-sucking, rote and redundant thing that you have to do that you would love to take off your to-do list? Figure out the top one to three tasks or frustrating things your team has to do. Then go find an off-the-shelf AI tool whose goal is to address them, and just start trying it with an enthusiastic group of team members. And you don’t have to do a lot of research, or days or weeks of looking into this, to find that. Use ChatGPT or Gemini to ask: here’s my problem, what are the top three AI solutions? Or go to a community forum like LSNTAP or the AI Peer Learning Labs and ask that question; find out what others are using. And then if it’s expensive, go and ask the company if they’ll give a nonprofit discount. A lot of these folks will do this. Say, hey, listen, if we like this, I’ve got 133 other organizations, or 2,000 people, or whatever, who could potentially be users as well. And you might be surprised by how willing they are to offer it for free or at reduced cost. So that would be my first dip of a toe into the AI waters.
Cat Moon:
Boom. Okay, Jennifer, bring us home. What you got?
Jennifer Zelnick:
Okay. If we’re thinking purely about someone who needs to learn how to use AI, one suggestion is to start with something that’s really low stakes. Test AI on something where you already know the answer; this can be a legal question, or it could be something else, and test how to get the answer that you want. On a larger scale, a piece of advice for funders is to think about how you can collect good data, without overburdening legal aid organizations, to find out how your grantees are already using AI, how they want to use it, and any questions and concerns they have. And then use that data to help scale up usage, make a work plan, and think about next steps really strategically.
Cat Moon:
Because that data is power right there. And speaking of data, the data from this report is actionable power as well. I’m so grateful to all of you for joining me to have this conversation, to dig into it and really extend and apply it. And I hope listeners found a nugget in here somewhere that is relevant to their work. Thank you all so much; this has been delightful. I wish we could keep talking.
Joanne Sprague:
Thank you. Thank you, Cat. I really appreciate you taking the time to highlight all this work.
Cat Moon:
It’s my pleasure; it’s been a blast. The work you each are doing right now to ensure AI helps narrow, rather than widen, the justice gap is exactly what this moment requires. To our listeners: if you want to read the full report we’ve been discussing, we will link to it in the show notes. And please do check out LSC’s AI Peer Learning Labs. Talk Justice is brought to you by the Leaders Council of the Legal Services Corporation and Legal Talk Network. If you like what you’ve heard, please be sure to rate and review the show and subscribe on your favorite podcast app.
Announcer:
Podcast guest speakers’ views, thoughts, and opinions are solely their own and do not necessarily represent the Legal Services Corporation’s views, thoughts, or opinions. The information and guidance discussed in this podcast are provided for informational purposes only and should not be construed as legal advice. You should not make decisions based on this podcast content without seeking legal or other professional advice.