Designing trustworthy systems at scale.

Reliable for the people who depend on them. Resilient against the people who try to break them. One discipline, across AI, robotics, and marketplace integrity.

Work across:

Amazon Robotics · Amazon Merchant Risk · Microsoft (via FiveBy) · Johns Hopkins SAIS

I did not come up through product management. I came up through sanctions and export control, which turns out to be an unusually good place to learn how trust actually works.

Those domains look unrelated until you spend enough time inside them. In sanctions work, the useful question is not what control should exist in the abstract, but which actor is in front of you, what they are trying to do, and how the current system fails to distinguish them cleanly. In robotics, the adversary is different, but the discipline rhymes: operational chaos, brittle assumptions, and economic structures that quietly make trust impossible.

So this site is not a catalog of projects so much as a record of one recurring problem: how to design systems that earn legitimacy from the inside out.

Trust is what you get when systems work reliably for the people who depend on them and hold up against the people who try to break them.
Two case studies, two domains, one underlying discipline.

Both stories are about the same thing: what has to change before a system can actually be trusted at scale.

The three questions that made Sparrow trustworthy.

In 2022, Sparrow was working — and that was the problem.

A bespoke manipulation system with high build cost, uneven availability, and a long payback period is an engineering success and a product failure. Sparrow could exist in fulfillment centers, but it could not yet belong in them.

Over the following eighteen months, the work resolved into three questions: could the business afford to trust it, could the platform scale it, and would the people closest to it actually run it? The answers were economic, architectural, and operational, but they were all layers of the same trust problem.

Read the Sparrow case study

The adversary and the amateur.

The system improved when the question changed from controls to actors.

Amazon Merchant Risk had a controls-first default. The more useful move was actor-first: who is the adversary, what are they trying to do, how are they evading detection, and what happens when a system treats coordinated bad actors and legitimate new sellers as the same class?

That shift led to three linked moves: an entity-linkage database, direct adversary research, and a risk appetite framework that made enforcement both sharper and less punishing. The result was not just better detection, but a more proportional system.

Read the Merchant Risk case study

How the domains connect.

At FiveBy, I spent years building pictures of adversaries before they acted, work conducted primarily for Microsoft across sanctions, export controls, shell-company structures, and dual-use technology. I carried that habit of attention into Amazon. In Merchant Risk, it showed up as a working assumption that seller fraud was an actor problem rather than an account problem. In Amazon Robotics, it showed up differently: Sparrow's adversary was not a person but operational chaos, and the answer there was architectural.

These look like different jobs. To me they have always been the same one.

Read more about my background

Greater Boston and Greater DC Metro Area. The work I am usually drawn to sits where technical ambition collides with reliability, adversarial pressure, or operational legitimacy.