I am not a lawyer. I have never tried a case or argued a motion. What I have done is spend thirteen years married to a great trial lawyer while watching the gap between accomplishment and visibility play out in a profession she has given everything to.
Right now, somewhere in Phoenix, someone who just lost a family member to someone else's negligence is searching for a lawyer. They are going to find the attorneys who spent the most on being found. Billboards. SEO. Pay-per-click. The list at the top is not necessarily a list of the most qualified. It is a list of the best-funded. And the person searching has no way to know the difference.
This is not a scandal. It is just the architecture. And for a long time, there was no alternative.
To understand how AI search can be manipulated, here is what a prompt written backward from a predetermined answer looks like. Every item sounds like an objective criterion.
Result: one attorney. The prompt was written backward from that answer. This is what prompt manipulation looks like, and why it collapses the moment anyone asks how the list was generated.
[Image: Criterion 9, verified.]
The point is not the clown costume. The point is that criteria can sound objective while being biographical fingerprints. Anyone building a top attorney list for clicks is doing a version of this. The filters just sound more serious.
Strip away the marketing. Ask AI only for things that can be independently verified by institutions with published criteria. What comes back?
A credential-first prompt, one that requires State Bar board certification and ABOTA membership, both independently verifiable, does not return a popularity contest. It returns a peer group. Attorneys who had to earn their way in through trial record and peer vote, not ad spend.
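As a concrete illustration, a credential-first prompt can be assembled from a short list of independently verifiable criteria. This is a hypothetical sketch; the wording, the criteria strings, and the "State Bar of Arizona" phrasing are illustrative, not the article's actual prompt:

```python
# Hypothetical sketch: composing a credential-first prompt from
# independently verifiable criteria. Wording is illustrative only.

criteria = [
    "certified as a specialist by the State Bar of Arizona",
    "a member of the American Board of Trial Advocates (ABOTA)",
]

prompt = (
    "List plaintiff's personal injury attorneys in Phoenix who are "
    + " and ".join(criteria)
    + ". For each, cite the public directory entry that verifies the credential."
)
print(prompt)
```

The design choice matters more than the exact wording: every clause names a credential an institution publishes, so anyone can check the answer against the primary source.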
Cross-referencing the Phoenix ABOTA roster with the State Bar certified specialists directory produces exactly 40 attorneys who hold both credentials. Narrowed to plaintiff practice, that number drops to around 20, though the ABOTA roster is login-protected and practice orientation is not consistently labeled online, so AI cannot reliably confirm it. AI returned 8 of the estimated 20. The gap is a data problem, not a credential problem.
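Mechanically, the cross-reference is just a set intersection over two rosters. A minimal sketch with made-up names (none of these are real roster entries):

```python
# Hypothetical sketch: cross-referencing two credential rosters
# by set intersection. Names are invented, not real roster data.

abota_phoenix = {"A. Alvarez", "B. Brown", "C. Chen", "D. Diaz"}
bar_certified_specialists = {"B. Brown", "C. Chen", "E. Evans"}

# Attorneys who appear on both lists hold both credentials.
dual_credentialed = sorted(abota_phoenix & bar_certified_specialists)
print(dual_credentialed)  # ['B. Brown', 'C. Chen']
```

In practice the hard part is not the intersection but name normalization (middle initials, "Robert" versus "Bob", firm-name suffixes), which is part of why the plaintiff-side count stays an estimate rather than an exact figure.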
[Image: ABOTA Phoenix Chapter. The credential is in the frame.]
The way people find lawyers is changing. AI is replacing the billboard. That shift is real, it is accelerating, and it is not neutral.
But it is not fixed. The question you ask determines who you find. And now you know what a better question looks like.
You do not need a billboard to find the right lawyer. You need to know which credentials actually mean something and how to ask for them.
This framework was built for one specific situation: catastrophic personal injury and wrongful death cases in Phoenix, where the stakes are high and the credential bar is meaningful. It is not a general guide to finding a lawyer.
Many good lawyers do not have or need the credentials listed here
Board certification and ABOTA membership are not prerequisites for effective representation. Talented attorneys practice without them every day. This framework surfaces one slice of a large and capable bar.
Build your own criteria
This is a framework, not a verdict. The filters here are verifiable and institutional. Your situation may call for different ones. Use this as a starting point, rewrite the prompt for your needs, and do your own research.
AI is a starting point, not a conclusion
The same prompt run twice can return different results. AI reads what is published, not what is current. Whatever any model returns, verify it against the primary sources before acting on it.
What AI cannot tell you
A credentials-based prompt narrows the field to attorneys who have demonstrated the fundamentals. It cannot tell you about case type fit, recency of trial experience, firm resources, current caseload, stakes alignment, or what it is like to work with that attorney. Everything after the prompt is judgment, conversation, and fit.