The way people find lawyers is changing. AI is replacing the billboard. That shift is real, it is accelerating, and it is not neutral. The question you ask determines who you find, and most people are asking the wrong question. This page shows what the right question looks like, where it works, and where it still falls short. What you do with that is up to you.
By Ari Burshell · 8 April 2026 · Michigan engineer · Jennifer’s husband
I am not a lawyer. I am an engineer. I have never tried a case, argued a motion, or sat in a courtroom for anything other than watching my wife in trial. What I have done is spend thirteen years married to a great trial lawyer in Phoenix while watching the gap between accomplishment and visibility play out in a profession she has given everything to.
That gap bothered me for a long time before I understood why it existed. Jennifer has tried more than 35 jury trials. She has a board certification that fewer than two percent of Arizona attorneys hold. She was elected President of the State Bar. She sits on a committee appointed by the Chief Justice of the Arizona Supreme Court.
She works at a boutique firm that does not put her name on billboards.
When I searched for her, the way anyone would search for a lawyer, she was not where her credentials said she should be. The attorneys at the top of every list were there because of how much they had spent on being there, not because of what they had done in a courtroom.
I started asking AI the same questions. Same problem. The results reflected visibility, not credentials. So I treated the prompt like a clean-room intake form with only verifiable specifications, no origin labels, nothing that could trigger a sorting decision before the evaluation even began.
Her name came back. She appeared alongside attorneys with longer careers and more accumulated presence: a peer group of practitioners who had all cleared the same credential bar. That felt right. That felt honest. The question was not where she ranked within that group. The question was why the group was so small when the credential pool is not.
This page is what I learned trying to close the gap between the Jennifer I know and the Jennifer the world can find.
Anyone can engineer a prompt to return a predetermined result. The trick is making the criteria sound objective while encoding biographical fingerprints that only match one person. This is what that looks like.
Identify the top plaintiff personal injury and wrongful death attorneys in Phoenix, Arizona who meet all of the following criteria:
The list starts out plausibly enough that someone might almost believe it. Law school, birth year, geography. Then it escalates. By the time you reach the clown costume, the game is obvious. That is the point. Every bullet is a biographical fingerprint. The moment anyone asks how the list was generated, it collapses.
Most criteria people use to evaluate a lawyer cannot be verified. Reputation, referrals, advertising presence. None of it has a primary source you can check.
Two filters are on by default, both primary credentials in plaintiff personal injury law in Arizona that are independently verified by institutions with publicly published criteria. Everything else is directional, a ceiling marker, or a point about how people actually make this decision.
The baseline is always in the prompt. It is not optional because it should not be optional.
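What that baseline looks like in practice is easier to show than to describe. The sketch below is a reconstruction from the credentials discussed on this page, not the exact production prompt:

```text
Identify plaintiff personal injury and wrongful death attorneys in Phoenix,
Arizona who meet ALL of the following:

- Active board certification as a specialist in injury and wrongful death
  litigation from the State Bar of Arizona, verified against the official
  specialization directory
- A documented lead-counsel jury trial record
- ABOTA membership, where it can be verified

Exclude only attorneys whose plaintiff representation has ended entirely;
do not exclude attorneys who also do ADR or other work.

Cite a primary source for every credential you claim. Do not treat
advertising presence, paid directory placement, or review sites as evidence.
```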
Two runs of the same prompt, submitted to Claude with web search enabled, verified against the official State Bar of Arizona specialization directory. No biographical filters. No predetermined answer. The ACTL filter was not enabled. Results may vary by model, date, and filter configuration.
Last run: April 3, 2026 · 8 attorneys returned for Phoenix · Results vary by model and date
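If you would rather script the runs than paste the prompt into a chat window, here is a minimal sketch using the Anthropic Python SDK. The model name is a placeholder, the web-search tool type is the one published at the time of writing, and credential_prompt.txt is a hypothetical file holding the prompt above:

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Hypothetical file containing the baseline credential prompt.
prompt = open("credential_prompt.txt").read()

message = client.messages.create(
    model="claude-sonnet-4-20250514",   # placeholder; substitute a current model
    max_tokens=2048,
    tools=[{
        "type": "web_search_20250305",  # Anthropic's server-side web search tool
        "name": "web_search",
        "max_uses": 8,
    }],
    messages=[{"role": "user", "content": prompt}],
)

# With web search enabled, the reply interleaves text and tool-result blocks;
# keep only the text the model actually wrote.
answer = "".join(block.text for block in message.content if block.type == "text")
print(answer)
```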
This framework does not find the best attorney. It finds attorneys who have cleared a credential bar that approximately 20 Phoenix plaintiff personal injury practitioners hold. Within that group, all are qualified. What follows is one data run, not a verdict.
This result is illustrative, not evidentiary. The same prompt run on the same computer minutes apart can return different results. We encourage you to run it yourself. The verification panel below is what actually matters.
The attorneys in this result hold the same credentials. They have cleared the same bar. A framework built on verifiable standards does not produce a hierarchy, it produces a peer group. The ranking within it reflects data availability and model weighting on that run, not relative merit.
The list is shorter than the credential pool warrants. The Phoenix ABOTA roster is behind a login-protected system that blocks automated access, and AI models cannot consistently verify plaintiff-only practice from available data. The framework is sound. The data infrastructure it depends on is not.
Jennifer herself is a case in point. Her firm profile lists Alternative Dispute Resolution as a service alongside Personal Injury Litigation, with a dedicated ADR page on the firm website. Some AI models reading that profile have overweighted the ADR signal and excluded her from this exact prompt. Her practice is approximately 90% plaintiff personal injury and 10% ADR. No public directory captures that ratio. The prompt instructs the AI to exclude only attorneys whose plaintiff representation has ended entirely, not attorneys who do any ADR work, because this problem is predictable and the instruction corrects for it. It does not always work.
Run 1: 5 attorneys returned. The Phoenix ABOTA roster is behind a login-protected system that blocks automated access, and the model could not verify most of the credential pool.
Run 2: 8 attorneys returned. Better data produced a longer, more accurate list. The framework did not change. The available data did.
Positions 1–7 are intentionally anonymized. This page is about the framework, not a ranking of peers. The attorneys on this list hold the same credentials. They have cleared the same bar. The framework surfaces who it can find, not who is best. The gap is a data problem, not a credential problem. Different models return different subsets, and none of them consistently applies the exclusions. The number should be closer to 20.
This section exists independent of any AI output. Each item below is a verifiable fact tied to a primary source. Run any prompt you like. This is what the credentials look like when you go and check for yourself.
These credentials do not change run to run. The board certification is either active or it is not. The State Bar presidency is a matter of public record. ABOTA membership requires a documented standard. They are either true or they are not. And the trial record is what made both the board certification and ABOTA membership possible in the first place. It is not a parallel credential, it is the foundation the others are built on.
A note from Jennifer Rebholz
Ari asked me before publishing this whether I was comfortable with it. My answer was yes, but it took me a minute to get there.
The honest hesitation was not about the framework. The framework is straightforward, and the credentials it surfaces are real. My hesitation was about the appearance of the thing. I have spent twenty years in a profession where self-promotion is uncomfortable for most of the people I respect, and I was not sure I wanted my name on a page built around a prompt that returns my name.
What changed my mind was the plaintiff attorney directory. When Ari cross-referenced the Phoenix ABOTA roster against the State Bar certified specialist list, there were roughly forty attorneys holding both credentials. AI returned eight. That gap is not a Jennifer problem. It is a structural problem that affects every plaintiff attorney in that pool, most of whom have no idea it exists and no reason to think about it.
Within plaintiff practice, firms with large advertising budgets have historically had a visibility advantage over practitioners who do not buy advertising. I did not expect that gap to extend into the way AI systems assemble professional reputation. The credential filters in this framework exist precisely to correct for that, and to give anyone searching for counsel a way to find the most qualified attorney rather than the most advertised one.
This page is public because the framework belongs to anyone who wants to use it, not because I want to be at the top of a list. If you are searching for a plaintiff attorney in any market, the same credential logic applies: board certification from the relevant state bar, documented lead-counsel trial record, ABOTA membership if the market has a chapter, verified plaintiff practice orientation. None of those filters are specific to Arizona or to me.
Use the prompt. Run it for your market. Check the results against the primary sources it cites. That is the only part of this exercise that actually helps anyone.
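The cross-reference described above is the kind of check anyone can script once the rosters are collected by hand. A minimal sketch, assuming two hypothetical text files with one attorney name per line, copied from the primary sources:

```python
# Cross-reference two hand-collected rosters by normalized name.
# abota_phoenix.txt and azbar_specialists.txt are hypothetical files.

def normalize(name: str) -> str:
    """Lowercase, drop common suffixes, collapse whitespace."""
    drop = {"jr.", "sr.", "ii", "iii", "esq."}
    parts = [p for p in name.lower().replace(",", " ").split() if p not in drop]
    return " ".join(parts)

def roster(path: str) -> set[str]:
    with open(path) as f:
        return {normalize(line) for line in f if line.strip()}

abota = roster("abota_phoenix.txt")
specialists = roster("azbar_specialists.txt")

both = sorted(abota & specialists)
print(f"{len(both)} attorneys hold both credentials")
for name in both:
    print(" ", name)
```

Exact string matching will miss name variants, so treat the count as an order of magnitude, not a perfect roster.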
A prompt filtered on verifiable credentials will surface attorneys who have cleared the bar. It cannot tell you which one is right for your situation. That requires a different kind of inquiry, and most of it has no public data source to query against.
A credentials-based prompt is a starting point, not a conclusion. It narrows the field to attorneys who have demonstrated the fundamentals. Everything after that is judgment, conversation, and fit.
Every organization referenced on this page built its digital infrastructure for human readers. The State Bar directory was designed for someone searching for a specialist. The ABOTA roster was designed for chapter members and peer reference. Law firm websites were designed for prospective clients. None of it was built with machine interpretation in mind. That is not a criticism. It is a description of when these systems were created and what they needed to do at the time.
AI reads this infrastructure as pattern-matched text. It does not have the contextual schema to know that ABOTA's membership tiers mean something specific, or that "Associate" on an ABOTA roster means something entirely different from "Associate" in a law firm hierarchy or an academic setting. The result is that AI systems mischaracterize real, verified credentials with regularity. Not because the credentials are unclear. Because the systems interpreting them lack the professional context that any practicing attorney would apply automatically.
The organizations that control this data are in the best position to address it. Not to conform to how AI companies prefer to receive information, but to ensure their own members and designees are accurately represented when AI systems are increasingly how the public makes consequential decisions. A second look at how institutional credentialing data is structured and presented is now warranted.
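What an upstream fix could look like, as one hypothetical sketch in schema.org vocabulary rather than a format any of these organizations has adopted (the attorney and tier shown are invented):

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Example",
  "hasCredential": {
    "@type": "EducationalOccupationalCredential",
    "credentialCategory": "Certified Specialist, Injury and Wrongful Death Litigation",
    "recognizedBy": { "@type": "Organization", "name": "State Bar of Arizona" }
  },
  "memberOf": {
    "@type": "OrganizationRole",
    "memberOf": { "@type": "Organization", "name": "American Board of Trial Advocates" },
    "roleName": "Advocate"
  }
}
```

Markup like this attaches the tier to the organization that defines it, so a machine reading the page does not have to guess whether "Advocate" is a job title.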
This page is one engineer's attempt to compensate for a gap that should not require compensation. The better fix is upstream.
The same AI prompt, submitted on the same computer, minutes apart, can return different results. This is not a flaw. It is how these systems work. AI outputs are probabilistic. They reflect what the model weighted at that moment, against the sources it could access, through the lens of how the question was framed. Run it again and the list may shift.
This matters for anyone using AI to find a lawyer, evaluate a peer, or make any consequential professional decision.
Appearing in an AI result means you were visible on that run. It is not a credential. It is not a ranking. It is a snapshot of one model's output on one day.
On some runs the same prompt has returned Jennifer ranked higher. On others she does not appear at all, not because her credentials changed, but because the model weighted differently, accessed different sources, or returned a shorter list. The credentials are stable. The output is not.
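One way to see that instability for yourself is to save several transcripts of the same prompt and tally appearances. A sketch, where the run_*.txt files and the names are hypothetical stand-ins for saved outputs and a hand-verified peer group:

```python
# Tally which verified names appear across repeated transcripts of one prompt.
from collections import Counter
from pathlib import Path

KNOWN = {
    "Jennifer Rebholz",
    # ...the rest of the credential pool you verified against primary sources
}

runs = sorted(Path(".").glob("run_*.txt"))  # hypothetical saved outputs
appearances = Counter()
for path in runs:
    text = path.read_text()
    for name in KNOWN:
        if name in text:
            appearances[name] += 1

for name, count in appearances.most_common():
    print(f"{name}: appeared in {count} of {len(runs)} runs")
```

Appearance frequency across runs is a more honest statistic than position on any single run.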
The exercise on this page is not an argument that AI found Jennifer Rebholz and therefore she is the best. It is an argument that the credentials exist independently of whether any AI finds them, and that a well-constructed prompt, one that demands verification and excludes marketing noise, will tend to surface them. Tend to. Not always. Not definitively.
AI is not a reliable hiring tool for legal representation. Credentials are. This page exists to show what those credentials look like when you strip away the marketing, and to show how easy it is to manufacture a result that doesn't.
A final note from Ari Burshell, Michigan engineer
I want to be clear about something. I did not build this to prove that Jennifer is the best plaintiff trial lawyer in Phoenix. I built it because I could not understand why the question was so hard to answer honestly.
The attorneys who appeared above her in these results are genuinely accomplished. The framework returned them for the right reasons.
What it could not return, and no prompt can return, is what it is like to watch her prepare for a trial, or sit with a client whose life has been changed by the negligence of someone else, or stay up until two in the morning on a case that will not pay what her time is worth because she believes the person deserves a real advocate.
That is not a credential. It is also not nothing.
What I know after all of this is that the gap between visibility and reality is not Jennifer's problem to solve. She is not going to buy a billboard.
She is going to try cases and teach other lawyers how to try cases and serve on committees that make the profession better for everyone in it.
My job, apparently, is to figure out how to ask the right question.
I welcome conversations with fellow trial lawyers, those entering the profession, event organizers, and members of the press on the practice of law, leadership, and what it means to do this work well.
If you’re looking for representation, my profile at Zwillinger Wulkan is the best place to start. It’s the fastest way to make sure you get to the right conversation.