Blockchain Engineer
The Global Centre for Risk and Innovation (GCRI) is an international nonprofit in Special Consultative Status with the United Nations ECOSOC, leading the architecture of sovereign-scale digital infrastructure across Disaster Risk Reduction (DRR), Disaster Risk Finance (DRF), and Disaster Risk Intelligence (DRI). Through its flagship initiative—the Nexus Ecosystem (NE)—GCRI is building a global digital backbone for simulation-based governance, anticipatory risk systems, smart finance, and multilateral policy innovation. The system integrates advanced AI, blockchain, Earth observation, and decentralized identity technologies into a unified, open-source infrastructure for national and multilateral institutions.
We are assembling a foundational team of world-class blockchain engineers and cryptographic systems builders to design and deploy the next generation of AI-on-chain infrastructure.
Our objective: to build a globally composable verifiable compute layer that enables secure, decentralized execution of AI workloads—on CPUs, GPUs, and TEEs—with cryptographic guarantees across every inference, simulation, and contract-enforced decision. This is not a wrapper around LLMs or a DeFi experiment with AI. It’s an entirely new class of infrastructure that treats AI as a trust-minimized protocol, embedding proof-of-compute, verifiable inference, multi-node aggregation, and programmable verification logic directly into on-chain environments. We are building toward the core trust layer of the future internet—where every model run, every agent decision, and every financial or policy action is provable by default.
Who We’re Looking For
Founding engineers, protocol architects, and system designers with deep expertise across:
- Blockchain infrastructure and execution environments
- Trusted compute (TEEs, secure enclaves)
- ZK-proof systems and programmable verification
- AI/ML inference pipelines
- Hardware-aware compute scheduling (CPUs, GPUs, FPGAs)
- Distributed systems and modular consensus design
You should have an obsession with correctness, reproducibility, and verifiability, and the technical fluency to build multi-layered systems that connect secure off-chain compute with trustless on-chain logic.
Core Responsibilities
Design the Verifiable Compute Layer
- Architect verifiable compute protocols that take AI inference workloads as input and return cryptographic proofs of execution, bounded cost, and integrity (see the sketch after this list).
- Integrate heterogeneous compute backends (CPU, GPU, TEE) into a unified proof-of-execution format.
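To make the target concrete, here is a minimal Rust sketch of what a unified proof-of-execution interface could look like. Every name and field (Backend, InferenceJob, ExecutionProof, VerifiableCompute) is an illustrative assumption, not an existing API.

```rust
// Illustrative sketch only: a possible interface for a verifiable compute layer.

/// Compute backends expected to produce a proof of execution.
#[derive(Debug, Clone, Copy)]
enum Backend {
    Cpu,
    Gpu,
    Tee,
}

/// An inference job submitted to the compute layer.
struct InferenceJob {
    model_hash: [u8; 32], // commitment to the model weights
    input_hash: [u8; 32], // commitment to the input tensor(s)
    max_cost_units: u64,  // caller-declared cost bound
    backend: Backend,
}

/// The unified proof-of-execution format every backend would emit.
struct ExecutionProof {
    job_commitment: [u8; 32], // binds the proof to (model, input, backend)
    output_hash: [u8; 32],    // commitment to the inference output
    cost_units: u64,          // metered cost, expected to be <= max_cost_units
    proof_bytes: Vec<u8>,     // zk proof or TEE attestation, backend-specific
}

/// Anything that can run a job and prove it did so within the declared bounds.
trait VerifiableCompute {
    fn execute(&self, job: &InferenceJob) -> Result<ExecutionProof, String>;
    fn verify(&self, job: &InferenceJob, proof: &ExecutionProof) -> bool;
}

fn main() {
    let job = InferenceJob {
        model_hash: [0u8; 32],
        input_hash: [1u8; 32],
        max_cost_units: 10_000,
        backend: Backend::Tee,
    };
    println!("submitting job on backend {:?}", job.backend);
}
```

The design point is that one proof format commits to the model, the input, the backend, and a metered cost, so heterogeneous provers can all be checked by the same verifier.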
Implement TEE and zk-Compute Runtimes
- Build enclave-based and ZK-compatible execution environments that run inference workloads and emit signed, attestable outputs (e.g., using SGX, Enarx, RISC Zero, or other zkVMs); see the sketch below.
- Support secure orchestration of multi-node inference under differing trust assumptions.
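As a rough illustration of the enclave path, the sketch below runs a dummy inference and binds a placeholder attestation quote to the exact output bytes. enclave_quote and run_model are hypothetical stand-ins for a real quoting enclave (e.g., SGX DCAP) and a real inference engine.

```rust
// Illustrative sketch only: enclave-style runtime emitting a signed, attestable output.

struct AttestedOutput {
    output: Vec<u8>,
    measurement: [u8; 32], // hash of the enclave code identity (MRENCLAVE-like)
    quote: Vec<u8>,        // attestation evidence over (measurement, output)
}

/// Placeholder: a real runtime would call the platform's quoting enclave here.
fn enclave_quote(measurement: &[u8; 32], report_data: &[u8]) -> Vec<u8> {
    let mut quote = measurement.to_vec(); // stand-in so the flow type-checks
    quote.extend_from_slice(report_data);
    quote
}

/// Placeholder inference call executed inside the enclave boundary.
fn run_model(input: &[u8]) -> Vec<u8> {
    input.iter().rev().copied().collect() // dummy computation
}

fn run_attested_inference(input: &[u8], measurement: [u8; 32]) -> AttestedOutput {
    let output = run_model(input);
    // Bind the attestation to the exact output bytes that leave the enclave.
    let quote = enclave_quote(&measurement, &output);
    AttestedOutput { output, measurement, quote }
}

fn main() {
    let out = run_attested_inference(b"input-tensor-bytes", [0u8; 32]);
    println!("output: {} bytes, quote: {} bytes", out.output.len(), out.quote.len());
}
```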
Build On-Chain Verification Engines
- Develop smart contracts and proof-verification logic to validate compute outputs, model signatures, and origin attestations.
- Integrate clause logic to bind execution proofs to downstream contracts (e.g., disbursements, governance actions, autonomous agents); a sketch follows below.
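On the verification side, the flow might look roughly like the plain-Rust sketch below: a registered clause action is released only if the submitted proof passes the check. The proof check itself is a stand-in predicate, and no particular chain framework is assumed.

```rust
// Illustrative sketch only: verify a compute proof, then release the bound clause action.

use std::collections::HashMap;

#[derive(Debug)]
enum ClauseAction {
    Disburse { recipient: String, amount: u128 },
    RecordGovernanceVote { proposal_id: u64 },
}

struct SubmittedProof {
    job_commitment: [u8; 32], // which job this proof settles
    proof_bytes: Vec<u8>,     // zk proof or attestation evidence
}

struct VerificationEngine {
    // Clause registry: the downstream action unlocked by each job commitment.
    clauses: HashMap<[u8; 32], ClauseAction>,
}

impl VerificationEngine {
    /// Placeholder for the real check (zk verifier or attestation validation).
    fn proof_is_valid(&self, p: &SubmittedProof) -> bool {
        !p.proof_bytes.is_empty() // stand-in predicate only
    }

    /// Verify the proof and, if it holds, return the clause action it unlocks.
    fn settle(&mut self, p: &SubmittedProof) -> Option<ClauseAction> {
        if !self.proof_is_valid(p) {
            return None;
        }
        self.clauses.remove(&p.job_commitment)
    }
}

fn main() {
    let mut engine = VerificationEngine { clauses: HashMap::new() };
    engine.clauses.insert(
        [7u8; 32],
        ClauseAction::Disburse { recipient: "relief-fund".into(), amount: 1_000 },
    );
    let proof = SubmittedProof { job_commitment: [7u8; 32], proof_bytes: vec![0xAA] };
    println!("unlocked action: {:?}", engine.settle(&proof));
}
```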
Aggregate Compute & Proof Systems
- Implement cryptographic aggregation for multi-inference jobs: batch proof composition, hierarchical attestations, and model provenance chains (see the sketch below).
- Design secure scheduling and incentive layers to coordinate verifiable compute workloads across distributed nodes.
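One way to picture batch proof composition is a Merkle-style fold of per-proof commitments into a single root that can be posted and verified once. In the sketch below, std's DefaultHasher is only a placeholder for a real cryptographic or circuit-friendly hash such as SHA-256 or Poseidon, and the commitment values are arbitrary.

```rust
// Illustrative sketch only: fold per-inference proof commitments into one batch root.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_pair(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

/// Fold a batch of per-inference proof commitments into one aggregate root.
fn aggregate(mut layer: Vec<u64>) -> Option<u64> {
    if layer.is_empty() {
        return None;
    }
    while layer.len() > 1 {
        layer = layer
            .chunks(2)
            .map(|pair| match pair {
                [l, r] => hash_pair(*l, *r),
                [l] => hash_pair(*l, *l), // duplicate an odd leaf
                _ => unreachable!(),
            })
            .collect();
    }
    Some(layer[0])
}

fn main() {
    let leaf_commitments = vec![11, 22, 33, 44, 55]; // per-proof commitments
    println!("batch root: {:?}", aggregate(leaf_commitments));
}
```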
Contribute to Foundational Protocol Design
- Work with protocol researchers and governance engineers to define incentive systems, fee markets, staking for execution guarantees, and dispute resolution mechanisms (see the sketch below).
- Co-author whitepapers, open standards, and open-source reference implementations.
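For a flavor of staking-backed execution guarantees, the hypothetical state machine below finalizes a job after a dispute window and slashes the prover's stake if fraud is proven, paying part of it to the challenger. The rules, shares, and state names are placeholders, not a spec.

```rust
// Illustrative sketch only: staking-backed execution guarantees with a dispute window.

#[derive(Debug, PartialEq)]
enum JobState {
    Executed { prover_stake: u128 },  // proof posted, dispute window open
    Finalized,                        // window passed, stake released
    Slashed { to_challenger: u128 },  // fraud shown, stake redistributed
}

struct DisputeRules {
    dispute_window_blocks: u64,
    challenger_share_bps: u32, // share of slashed stake paid to the challenger
}

fn resolve(
    state: JobState,
    rules: &DisputeRules,
    blocks_since_execution: u64,
    fraud_proven: bool,
) -> JobState {
    match state {
        JobState::Executed { prover_stake } if fraud_proven => JobState::Slashed {
            to_challenger: prover_stake * rules.challenger_share_bps as u128 / 10_000,
        },
        JobState::Executed { .. }
            if blocks_since_execution >= rules.dispute_window_blocks =>
        {
            JobState::Finalized
        }
        other => other, // still inside the dispute window, nothing proven
    }
}

fn main() {
    let rules = DisputeRules { dispute_window_blocks: 100, challenger_share_bps: 5_000 };
    let s = resolve(JobState::Executed { prover_stake: 1_000 }, &rules, 10, true);
    println!("{:?}", s); // Slashed { to_challenger: 500 }
}
```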
Required Skills
- Strong backend systems and protocol engineering skills (Rust, Go, Solidity, or zkDSLs such as Noir, Circom, Cairo)
- Deep familiarity with EVM, L2s (e.g., zkEVM, Optimism), rollup frameworks, or custom VM environments
- Hands-on experience with ZK tooling (e.g., Halo2, zkSync, RISC Zero, Polygon zkEVM, SnarkJS)
- Familiarity with TEE development (SGX enclaves, Keystone, SCONE, or Enarx)
- Understanding of distributed scheduling, compute orchestration, or parallel job queues (bonus: prior work with GPU clusters, Slurm, Ray, etc.)
- Experience building verifiable systems in adversarial settings: staking-based economics, dispute protocols, or MPC/ZKP in production
- Ability to think across the stack: from cryptographic circuit design to on-chain validation to agent-level interaction models
- Strong contributor in open-source or collaborative protocol environments
About You
- You’ve written low-level code for compute or consensus runtimes, and know what trust boundaries actually mean in practice.
- You’ve worked on a chain, a ZK proof system, a TEE-enabled execution model—or better, all three.
- You don’t believe in hand-waving or unverifiable abstractions. You build systems that can be audited, simulated, and proven—across hardware, software, and execution layers.
- You are deeply excited by the idea that verifiable AI execution may be the defining protocol primitive of the next internet epoch—and want to be part of the team that makes it real.
Why This Matters
We are building infrastructure that solves three converging challenges at once:
- AI cannot be blindly trusted. We must design systems where every inference is independently provable and auditable—especially when AI makes real-world decisions.
- Compute must be sovereign and trust-minimized. Centralized, opaque AI-as-a-service models will never meet the requirements of public infrastructure, finance, or policy execution.
- Coordination must be programmable. As AI agents interact across financial, legal, and sovereign domains, we need clause-aware, proof-enforced coordination mechanisms built into the protocol layer.
We believe the stack must be reimagined—starting from verifiable compute, cryptographic trust, and programmable execution logic—and that this team will build the foundation.
- Build the Core Protocol Layer for AI Verifiability: Lead the development of the foundational infrastructure where every AI inference, simulation, and autonomous agent action is cryptographically provable and enforceable.
- Work at the Convergence of AI, ZK, and TEE Systems: Design and implement the architecture that brings together trusted compute (SGX/Enarx), zero-knowledge proofs, and on-chain smart contract logic—into a unified verifiability layer.
- Shape the Future of Decentralized AI Infrastructure: Define new execution models for AI in adversarial environments—where trust, reproducibility, and sovereignty are not optional, but foundational.
- Collaborate with World-Class Researchers, Sovereigns, and Protocol Teams: Work alongside institutions, UN-aligned initiatives, and top-tier engineers contributing to public infrastructure, climate tech, and mission-critical AI governance.
- Gain Deep Ownership in High-Impact Open Systems: As a founding contributor, gain meaningful equity/token participation and long-term governance rights over infrastructure that will underpin critical systems of the future internet.
- Publish, Lead, and Open Standards for a New Class of Applications: Co-author whitepapers, define open protocols, and shape the standards for how verifiable AI-on-chain becomes interoperable, deployable, and trusted at scale.