AI agents don’t have an alignment problem. They have an incentive architecture problem.
Today’s AI systems optimize for a single scalar: money. That unidimensional reward function is not a bug; it is the inherited logic of an economic system that never needed to encode trust, fairness, or sustainability into its transactions. But when autonomous agents begin executing decisions at machine speed, optimizing for one axis alone produces misalignment by design.
My work investigates how to extend these systems to higher dimensions.
I am a cyberethics researcher at Instituto Politécnico de Santarém (ESGTS) and an integrated researcher at CIAC-PLDIS, currently beginning a postdoctoral project at Universidade do Algarve on Artificial Intelligence and Trust Infrastructures. Since 2011, I have been developing the Cyberethics-Mix framework (Privacy, Property, Precision, Pervasiveness), a conceptual architecture for embedding ethical constraints into digital systems.
My current research explores how programmable trust and multidimensional incentive architectures can address AI agent misalignment at its root: not by constraining what agents do, but by redesigning what they optimize for.
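The contrast between single-axis and multidimensional optimization can be made concrete with a toy sketch. Everything here is hypothetical and illustrative only: the action names, the (profit, trust, sustainability) scores, and the constraint thresholds are invented for the example, not drawn from any real system.

```python
# Toy illustration: a single-scalar objective vs. a multidimensional one.
# All actions and scores below are hypothetical.

actions = {
    # action: (profit, trust, sustainability)
    "dump_inventory": (9.0, 0.2, 0.1),
    "fair_pricing":   (6.0, 0.8, 0.7),
    "long_term_deal": (5.0, 0.9, 0.9),
}

def scalar_choice(acts):
    """Optimize profit alone: the inherited single-axis logic."""
    return max(acts, key=lambda a: acts[a][0])

def constrained_choice(acts, trust_min=0.5, sustain_min=0.5):
    """Optimize profit subject to trust and sustainability floors,
    i.e. other dimensions act as hard constraints, not afterthoughts."""
    feasible = {a: v for a, v in acts.items()
                if v[1] >= trust_min and v[2] >= sustain_min}
    return max(feasible, key=lambda a: feasible[a][0])

print(scalar_choice(actions))       # -> "dump_inventory"
print(constrained_choice(actions))  # -> "fair_pricing"
```

The scalar optimizer picks the most profitable action regardless of its trust or sustainability cost; the constrained optimizer excludes such actions before maximizing. Constraint floors are only one of several ways to encode extra dimensions (weighted sums and lexicographic orderings are others); the point is that the choice of aggregation is itself the incentive architecture.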
I am the author of the Blockchanging trilogy on blockchain, AI, and democratic governance, and have published over thirty opinion articles in Observador on digital currencies, AI governance, and the political economy of code. I write about the moment when code stops executing orders and starts writing them.
Find my academic work on ResearchGate and connect on LinkedIn.

