Global Risks Forum 2025
Micro-production Model (MPM)

Bounties

Bounties are precision-scoped, performance-based tasks that support the open-source development of critical components across the Nexus Platform. Contributors tackle specialized challenges in HPC modeling, parametric finance, AI-based early warning, or data harmonization—earning pCredits for validated outputs. Governed under GRA’s technical and RRI review structures, Bounties power modular, auditable risk tools deployable at national and global scale. As the core engine of the Nexus Micro-Production Model (MPM), Bounties align expert contribution with operational needs—bridging innovation with impact in disaster risk reduction, financing, and intelligence.

Bounties are targeted, higher-value tasks that require specialized skills and a tangible outcome—like refining a parametric finance script or developing advanced analytics for hazard data. Unlike Quests (which often focus on smaller on-ramp tasks), Bounties typically have structured outputs, robust peer reviews, and integration paths that demand more thorough domain knowledge.

Any recognized contributor—from institutional members to domain specialists—can propose a Bounty if they identify a significant challenge in DRR, DRF, or DRI that benefits from open collaboration. The Bounty must pass a short approval cycle, ensuring alignment with RRI, strategic objectives, and data ethics guidelines.

Bounties are modular development units. Each Bounty defines its scope, deliverables, and RRI constraints. By working in these discrete, trackable chunks, contributors can efficiently tackle real-world challenges (e.g., verifying risk finance triggers, building new dashboards) without losing synergy across the larger open innovation ecosystem.

Bounties may involve data ingestion improvements, advanced analytics, geospatial model validations, parametric instrument expansions, or integration of domain knowledge (such as local hazard references). They often require coding, data science, design thinking, or policy expertise to yield high-impact outputs for risk management or finance solutions.

Each Bounty is reviewed and governed under RRI guidelines. Technical tasks must include disclaimers or ethical reflections (e.g., potential biases in data sets or local sovereignty issues), while policy-centric tasks often require multi-stakeholder feedback. Participants document these considerations to ensure the solution remains transparent, inclusive, and socially responsible.

Successful Bounty completion yields participation credits (pCredits) or partial validation credits (vCredits) if the task demands advanced review. Contributors also gain visibility, an enhanced reputation across Nexus Platforms, and deeper engagement rights (like proposing further expansions or leading certain Build sprints).

Bounties typically require multi-phase approval: (1) a technical review by domain colleagues to confirm solution quality, (2) an RRI oversight check for data or policy compliance, and (3) possible local stakeholder feedback if relevant to parametric triggers or cross-border governance. Final acceptance triggers the awarding of pCredits or partial vCredits.

Bounties can be repeated. If new data emerges or local conditions shift, an existing Bounty can be rerun with updated parameters (e.g., new climate data sets). Bounties can also be extended to incorporate larger tasks, or forked by domain subgroups that adapt the deliverable for another region or hazard type, thereby fostering iterative open development.

Bounties are building blocks for multi-stakeholder “Builds.” Teams may compose multiple Bounties into a cohesive pipeline—e.g., verifying hazard polygons, refining parametric logic, and building real-time dashboards. Once completed, these Bounties collectively yield a final deployable module or cross-regional initiative that addresses a critical risk problem.

Depending on the scope, Bounties may yield data pipelines, parametric contract code, advanced risk dashboards, or domain-focused reports for policymaking. Each deliverable is version-controlled, documented with disclaimers, and integrated into the open environment to ensure it remains scalable, ethically valid, and adaptable for local or global DRR, DRF, and DRI needs.

Open-Source Early Warning Mobile Application

Develop an open-source mobile application that provides real-time, location-based early warning alerts for extreme weather events. The application should conform to Common Alerting Protocol (CAP) standards, integrate trusted data sources, and include multilingual support for global usability.

Early warnings for extreme weather events are critical to reducing loss of life and property damage. However, many existing systems either lack local context or fail to deliver timely alerts. By integrating multiple official data feeds (e.g., from NOAA, WMO) and local crowd-sourced reports, this project aims to create a reliable and widely accessible early warning app. The solution will follow CAP standards to ensure consistency and compatibility with global alerting systems, and it will include modular, open-source components to facilitate adaptation in diverse regions.
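
To make the CAP requirement concrete, here is a minimal sketch of how an ingestion layer might parse a CAP 1.2 alert and extract the fields needed for a geolocated notification. The sample payload is abridged and hypothetical; real feeds carry additional required elements (sender, sent, scope).

    # Minimal CAP 1.2 parsing sketch (abridged, hypothetical payload).
    import xml.etree.ElementTree as ET

    CAP_NS = {"cap": "urn:oasis:names:tc:emergency:cap:1.2"}

    SAMPLE_ALERT = """<?xml version="1.0"?>
    <alert xmlns="urn:oasis:names:tc:emergency:cap:1.2">
      <identifier>demo-001</identifier>
      <status>Actual</status>
      <msgType>Alert</msgType>
      <info>
        <language>en-US</language>
        <event>Flash Flood Warning</event>
        <urgency>Immediate</urgency>
        <severity>Severe</severity>
        <headline>Flash flooding expected within 2 hours</headline>
        <area>
          <areaDesc>Lower river basin</areaDesc>
          <polygon>38.0,-77.0 38.1,-77.0 38.1,-76.9 38.0,-77.0</polygon>
        </area>
      </info>
    </alert>"""

    def parse_cap(xml_text):
        """Extract the fields needed for a location-based push notification."""
        root = ET.fromstring(xml_text)
        info = root.find("cap:info", CAP_NS)
        area = info.find("cap:area", CAP_NS)
        return {
            "id": root.findtext("cap:identifier", namespaces=CAP_NS),
            "event": info.findtext("cap:event", namespaces=CAP_NS),
            "severity": info.findtext("cap:severity", namespaces=CAP_NS),
            "headline": info.findtext("cap:headline", namespaces=CAP_NS),
            "polygon": area.findtext("cap:polygon", namespaces=CAP_NS),
        }

    print(parse_cap(SAMPLE_ALERT))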

This initiative will produce a mobile app that provides real-time, geolocated alerts based on standardized alerting protocols and verified data sources. The open-source nature of the project will enable adaptation for various languages, regions, and hazard types. Accompanying documentation and deployment instructions will make it easy for local governments and NGOs to adopt the system, improving emergency preparedness worldwide.

Target Outcomes:

  • A fully functional mobile app that meets CAP standards.
  • Integration with multiple trusted data sources and multilingual support.
  • Comprehensive documentation and open-source codebase for global replication.

GIS-Based Urban Heat Island Analysis Tool

Design an advanced GIS platform that analyzes urban heat islands (UHIs) using multi-source geospatial data, remote sensing imagery, and IoT temperature readings. The solution should follow established geospatial data standards (e.g., OGC standards, ISO 19157 for data quality) and provide actionable heat mitigation strategies.

Urban heat islands pose significant health and energy challenges, particularly in rapidly growing cities. Understanding spatial and temporal heat distribution patterns is key to crafting effective mitigation strategies. This project will incorporate data from multiple sources—such as Landsat satellite imagery, IoT-enabled temperature sensors, and municipal land-use datasets—and process it using industry-standard GIS frameworks (e.g., QGIS, ArcGIS). By adhering to open geospatial data standards and providing standardized output formats (GeoTIFF, shapefiles), the platform will facilitate integration with city planning tools.
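
As one illustration of the core analysis step, the sketch below computes a simple UHI intensity surface (pixel temperature minus a rural reference mean) from a land surface temperature raster. The input file names and the rural mask are hypothetical placeholders.

    # UHI intensity sketch: pixel LST minus the mean rural reference LST.
    # Assumes "lst.tif" is a single-band land surface temperature raster (degrees C)
    # and "rural_mask.tif" marks rural reference pixels with 1 (both hypothetical).
    import numpy as np
    import rasterio

    with rasterio.open("lst.tif") as src:
        lst = src.read(1).astype("float64")
        profile = src.profile

    with rasterio.open("rural_mask.tif") as src:
        rural = src.read(1) == 1

    rural_mean = np.nanmean(np.where(rural, lst, np.nan))
    uhi_intensity = lst - rural_mean  # positive values indicate urban excess heat

    profile.update(dtype="float64")
    with rasterio.open("uhi_intensity.tif", "w", **profile) as dst:
        dst.write(uhi_intensity, 1)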

The project will produce an open-source GIS tool that maps UHIs with high spatial and temporal resolution. It will provide detailed analysis and recommendations for urban planners, helping reduce heat exposure and improve city livability. By leveraging OGC-compliant data formats and publishing all algorithms and workflows, the tool will ensure reproducibility, scalability, and broad adoption.

Target Outcomes:

  • A GIS platform that meets OGC and ISO standards for geospatial data quality.
  • Demonstrated case studies showing the effectiveness of recommended heat mitigation strategies.
  • Open-source code, data workflows, and documentation.

AI-Powered Disease Surveillance Platform

Develop a machine learning-driven disease surveillance platform that aggregates and analyzes syndromic surveillance data, social media signals, and environmental indicators. The platform should comply with WHO’s International Classification of Diseases (ICD) standards and include robust data privacy measures.

Traditional disease surveillance methods often lag behind the speed of disease spread. A modern approach must leverage AI to analyze multiple data sources simultaneously, identifying outbreaks before they escalate. This project will apply advanced AI techniques (e.g., natural language processing for social media analysis, graph-based models for contact tracing) and align with international health data standards. It will also incorporate data governance frameworks (e.g., GDPR compliance, HL7 FHIR standards) to ensure responsible data handling.
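
To illustrate the detection idea at its simplest, the sketch below flags anomalous days in a daily syndromic count series using a moving-baseline z-score, in the spirit of EARS-style algorithms; the counts are synthetic.

    # Moving-baseline anomaly detection over daily syndromic counts (synthetic data).
    import numpy as np

    rng = np.random.default_rng(42)
    counts = rng.poisson(lam=20, size=60).astype(float)
    counts[50:] += 25  # inject a synthetic outbreak signal

    def detect(counts, window=7, threshold=3.0):
        """Flag days whose count exceeds baseline mean + threshold * std."""
        alarms = []
        for t in range(window, len(counts)):
            baseline = counts[t - window:t]
            mu, sigma = baseline.mean(), max(baseline.std(ddof=1), 1e-9)
            if (counts[t] - mu) / sigma > threshold:
                alarms.append(t)
        return alarms

    print("Alarm days:", detect(counts))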

This initiative will produce a disease surveillance platform that integrates AI-driven insights, syndromic surveillance data, and environmental triggers. By complying with international standards for health data and privacy, it will enable public health authorities to respond faster and more effectively. The resulting solution will be open-source, accompanied by extensive documentation and training materials.

Target Outcomes:

  • A functional AI platform with integrated data streams and predictive capabilities.
  • Compliance with ICD and HL7 FHIR standards.
  • Published benchmarks demonstrating faster outbreak detection compared to current systems.

AI-Augmented Policy Analysis Engine

Develop an AI-driven policy analysis platform that extracts insights from large-scale legislative and policy datasets. The platform should integrate natural language processing (NLP) models and adhere to standards like ISO 22397 for information exchange and OECD guidelines for policy data documentation.

Governments and organizations face an overwhelming volume of complex policy documents, making it difficult to identify best practices or predict policy outcomes. By applying NLP techniques—such as topic modeling, sentiment analysis, and knowledge graph construction—this bounty aims to streamline the analysis process. The system will ingest structured and unstructured policy data, analyze trends and impacts, and present actionable insights in a user-friendly format.
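
As a minimal sketch of the topic-modeling step, the snippet below applies TF-IDF weighting and non-negative matrix factorization (NMF) to a toy policy corpus; the documents are placeholders.

    # Topic modeling sketch over a toy policy corpus (documents are placeholders).
    from sklearn.decomposition import NMF
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "flood insurance subsidies for coastal municipalities",
        "parametric insurance payouts triggered by rainfall thresholds",
        "renewable energy tax credits and grid modernization",
        "carbon pricing and emissions trading schemes",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(docs)

    nmf = NMF(n_components=2, random_state=0)
    nmf.fit(tfidf)

    terms = vectorizer.get_feature_names_out()
    for i, weights in enumerate(nmf.components_):
        top = [terms[j] for j in weights.argsort()[::-1][:4]]
        print(f"Topic {i}: {', '.join(top)}")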

This project will deliver an AI-powered tool that uses cutting-edge NLP techniques to extract, summarize, and visualize policy impacts. The platform will align with ISO and OECD standards, ensuring that data sources and analytical methodologies are transparent and reproducible. By making the codebase and analytical pipelines open-source, the solution will serve as a foundation for further research and application in the policy domain.

Target Outcomes:

  • An AI platform that processes and visualizes policy data following international standards.
  • Analytical reports that identify trends, best practices, and potential policy impacts.
  • Open-source code and detailed technical documentation.

AI-Driven Food Security Monitoring Dashboard

Develop a highly interactive dashboard powered by artificial intelligence, capable of analyzing multi-source data—remote sensing imagery, soil condition reports, and market price indices—to provide early warnings and actionable insights into food security risks. The system should align with internationally recognized agricultural data standards (e.g., FAO’s AGRIS standards) and employ cutting-edge visualization frameworks.

Global food systems face increasing threats from climate variability, supply chain disruptions, and resource constraints. This challenge demands a data-driven, predictive approach. By employing advanced AI techniques—such as convolutional neural networks (CNNs) for analyzing satellite imagery and gradient boosting algorithms for crop yield prediction—this project will create a comprehensive platform. The dashboard will adhere to Open Data standards (e.g., FAIR principles) and integrate with widely used agricultural data models (e.g., ISO 19156 Observations and Measurements).
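
To make the prediction pipeline concrete, here is a minimal gradient-boosting sketch; the features are synthetic stand-ins for the remote sensing, soil, and market inputs described above.

    # Crop yield regression sketch with gradient boosting (synthetic features).
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.uniform(0.2, 0.9, n),   # NDVI-like vegetation index
        rng.uniform(5, 40, n),      # soil moisture (%)
        rng.uniform(50, 200, n),    # market price index
    ])
    y = 2.0 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.1, n)  # synthetic yield

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
    print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))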

This initiative will produce a food security dashboard built on open-source technologies and standardized data formats, enabling seamless integration into existing agricultural monitoring systems. The platform will support predictive analytics workflows, from data ingestion and preprocessing to model deployment and interactive visualization. Documentation will detail how to replicate and extend the dashboard’s capabilities, ensuring its usability across diverse regions and user groups.

Target Outcomes:

  • A machine learning-powered dashboard compliant with international agricultural data standards and FAIR principles.
  • High-accuracy predictive models for crop yields, market trends, and climate impacts.
  • Comprehensive technical documentation and an extensible codebase.

Quantum Simulation Framework for Climate Risk Scenarios

Develop a quantum computing-based simulation framework that enables large-scale, high-precision modeling of climate risk scenarios. The framework should adhere to emerging quantum standards and open data protocols, leveraging quantum-enhanced optimization techniques to model complex climate interdependencies.

Conventional simulation methods often struggle to handle the intricate interactions between climate variables, socioeconomic factors, and ecosystem responses. Quantum computing’s ability to perform certain types of computations exponentially faster than classical approaches offers a transformative opportunity. This project will leverage quantum algorithms, such as Variational Quantum Eigensolvers (VQE) for optimization and quantum Monte Carlo methods for probabilistic scenarios. It will integrate these approaches with standardized environmental datasets, following guidelines like the Copernicus Climate Data Store (CDS) formats and the Open Energy Modelling Framework (oemof).
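
As a minimal illustration of the VQE pattern, the sketch below minimizes the energy of a toy two-qubit Hamiltonian with a parameterized circuit. It assumes Qiskit 1.x and SciPy; a real climate-risk formulation would first encode the optimization problem as an operator.

    # VQE-style loop on a toy two-qubit Hamiltonian (illustrative only).
    import numpy as np
    from qiskit.circuit.library import TwoLocal
    from qiskit.quantum_info import SparsePauliOp, Statevector
    from scipy.optimize import minimize

    hamiltonian = SparsePauliOp.from_list([("ZZ", 1.0), ("XI", 0.5), ("IX", 0.5)])
    ansatz = TwoLocal(2, "ry", "cz", reps=1)

    def energy(params):
        """Expectation value of the Hamiltonian in the parameterized state."""
        state = Statevector(ansatz.assign_parameters(params))
        return np.real(state.expectation_value(hamiltonian))

    x0 = np.zeros(ansatz.num_parameters)
    result = minimize(energy, x0, method="COBYLA")
    print("Estimated minimum energy:", round(float(result.fun), 4))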

This bounty aims to create a proof-of-concept quantum simulation framework that demonstrates significant improvements in processing time and scenario accuracy. It will adhere to existing climate data standards and incorporate reproducible workflows. By publishing all algorithms and data workflows as open-source resources, this project will provide a foundational tool for researchers, policymakers, and industry stakeholders to better anticipate and mitigate climate risks.

Target Outcomes:

  • A validated quantum simulation framework benchmarked against classical methods.
  • Open-source code and accompanying datasets, compatible with existing climate data platforms.
  • Detailed documentation on the quantum algorithms used, ensuring reproducibility and adoption by the research community.

Distributed Ledger for Supply Chain Resilience

Create a blockchain-based solution that enhances the traceability and transparency of critical supply chains, ensuring continuity and integrity in times of crisis. The platform should conform to global supply chain standards and frameworks, such as GS1 standards for product identification and ISO 28000 for supply chain security.

Disruptions in supply chains during emergencies can lead to severe economic and humanitarian consequences. A blockchain-powered approach can provide real-time visibility into supply chain transactions, improve accountability, and facilitate rapid response. By integrating globally recognized standards, such as GS1’s EPCIS (Electronic Product Code Information Services) and ISO 22095 chain-of-custody requirements, this solution ensures a secure, interoperable environment. Trusted oracles and industry-grade blockchain networks (e.g., Hyperledger, Ethereum) will provide the foundational infrastructure.
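
As a small sketch of how event data could be anchored, the snippet below builds an EPCIS 2.0-style ObjectEvent and derives a deterministic digest suitable for recording on a ledger. The field values are illustrative; a production system would follow the GS1 EPCIS JSON-LD schema and an agreed canonical hashing rule.

    # Sketch: EPCIS 2.0-style ObjectEvent plus a ledger-ready digest.
    # Field values are illustrative, not a validated EPCIS document.
    import hashlib
    import json

    event = {
        "type": "ObjectEvent",
        "action": "OBSERVE",
        "eventTime": "2025-01-15T08:30:00Z",
        "eventTimeZoneOffset": "+00:00",
        "epcList": ["urn:epc:id:sgtin:0614141.107346.2018"],
        "bizStep": "shipping",
        "readPoint": {"id": "urn:epc:id:sgln:0614141.00777.0"},
    }

    # Deterministic serialization so the same event always yields the same digest.
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    print("Event digest to anchor on-chain:", digest)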

This bounty focuses on developing a blockchain-based platform that delivers secure, verifiable, and standards-compliant supply chain transparency. By following international frameworks and providing a clear audit trail of transactions, the solution will improve resilience, reduce waste, and help ensure the timely delivery of critical goods during disruptions. Comprehensive implementation documentation and open-source smart contract libraries will make the system accessible and scalable.

Target Outcomes:

  • A blockchain prototype that integrates GS1 and ISO 28000 standards.
  • Demonstrable improvements in supply chain visibility and responsiveness.
  • Open-source smart contracts and deployment guidelines.

Blockchain-Based Disaster Relief Payout Protocol

Design and implement a blockchain-enabled payout system using parametric insurance models and automated smart contracts. This system will streamline disaster relief payouts by adhering to international regulatory frameworks, integrating trusted oracles for real-time data verification, and providing a transparent ledger for all transactions.

Traditional disaster relief funding often suffers from inefficiencies, lack of transparency, and prolonged distribution times. A blockchain-based parametric model—where payouts are triggered by specific, predefined criteria (e.g., rainfall thresholds)—can resolve these challenges. This solution will be built on well-established blockchain platforms, such as Ethereum or Hyperledger Fabric, and use token standards such as ERC-20 or ERC-721 for payout tokens. Integration with trusted data oracles (e.g., Chainlink or Provable) ensures that triggers are based on verified, tamper-proof information. This approach will also consider international frameworks for financial inclusion and disaster risk financing, such as those recommended by the World Bank and the Insurance Development Forum (IDF).
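
The trigger logic itself is simple enough to sketch in a few lines. The tiers below are illustrative; on-chain, this logic would live in a smart contract fed by a verified oracle reading.

    # Parametric payout trigger sketch (pure Python; on-chain this would be a
    # smart contract, with the rainfall reading supplied by a trusted oracle).
    PAYOUT_TIERS = [  # (rainfall threshold in mm over 72h, payout fraction), illustrative
        (150.0, 0.25),
        (250.0, 0.50),
        (400.0, 1.00),
    ]

    def payout_fraction(rainfall_mm):
        """Return the fraction of the insured sum released for a verified reading."""
        fraction = 0.0
        for threshold, tier in PAYOUT_TIERS:
            if rainfall_mm >= threshold:
                fraction = tier
        return fraction

    # Example: a verified oracle reading of 260 mm releases 50% of the insured sum.
    insured_sum = 1_000_000
    print("Payout:", insured_sum * payout_fraction(260.0))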

The proposed system will include a suite of blockchain-based smart contracts that automate relief fund distribution upon the occurrence of a verified event. Compliance with international financial reporting standards (e.g., IFRS 17 for insurance contracts) and best practices for blockchain security (e.g., OWASP Blockchain Security Framework) will be integral. This ensures that payouts are not only prompt but also fully auditable and secure. The open-source implementation will include smart contract templates, deployment scripts, and a detailed integration guide for humanitarian organizations and insurers.

Target Outcomes:

  • A fully functional smart contract framework compliant with ERC standards and integrated with trusted data oracles.
  • A secure and transparent audit trail of all payouts, following established blockchain security standards.
  • A publicly available deployment guide that supports implementation in multiple jurisdictions.

Smart Water Resource Management System

Develop a distributed IoT platform that monitors water quality, usage, and availability in real time, conforming to international IoT standards (e.g., ISO/IEC 30141 IoT Reference Architecture) and environmental data protocols. This system will provide a reliable, low-cost solution for managing water resources in water-stressed regions.

Water scarcity is a global crisis that affects billions. Efficient, data-driven water resource management requires continuous, reliable monitoring systems. By deploying IoT devices (e.g., sensors for measuring turbidity, pH, and flow rates) and integrating them into a unified cloud-based platform, this project will enable real-time insights into water system performance. The solution will leverage secure communication protocols (e.g., MQTT with TLS) and adhere to industry frameworks such as the Industrial Internet Consortium’s (IIC) Connectivity Framework and the OGC SensorThings API for sensor data interoperability.
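
As a minimal sketch of the device side, the snippet below publishes a sensor reading over MQTT with TLS. It assumes the paho-mqtt 2.x client; the broker hostname and topic are placeholders.

    # Sensor publish sketch: MQTT over TLS (paho-mqtt 2.x assumed).
    import json
    import paho.mqtt.client as mqtt

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.tls_set()  # use system CA certificates; server certificate is verified
    client.connect("broker.example.org", 8883)  # placeholder broker, TLS port
    client.loop_start()

    reading = {"sensor_id": "well-07", "turbidity_ntu": 3.2, "ph": 7.1,
               "flow_lpm": 41.5, "ts": "2025-01-15T08:30:00Z"}
    info = client.publish("water/quality/well-07", json.dumps(reading), qos=1)
    info.wait_for_publish()  # block until the QoS 1 handshake completes

    client.loop_stop()
    client.disconnect()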

This bounty will deliver a low-power, high-reliability IoT system that collects, transmits, and analyzes water data in real time. The system will follow established IoT and environmental data standards, ensuring scalability and integration with broader water management initiatives. The resulting platform will be fully documented, from hardware deployment to software integration, providing a blueprint for replication in other regions.

Target Outcomes:

  • A validated IoT architecture that meets ISO/IEC 30141 standards.
  • Demonstrated improvements in water resource allocation and quality monitoring.
  • A comprehensive implementation guide with hardware and software specifications.

Dynamic Coastal Flood Risk Prediction Model

Develop a scalable, machine learning-driven framework that integrates multi-modal Earth observation data, historical meteorological datasets, and real-time oceanographic observations to deliver coastal flood forecasts with a 48-hour lead time. The solution should incorporate international standards, interoperable data formats, and robust validation protocols to ensure reliability and scalability across multiple coastal regions.

Coastal flooding is among the most costly and frequent natural disasters, intensified by climate change and rapid urbanization. Current forecasting methods often lack the precision, granularity, or timeliness required for proactive response measures. To address these limitations, the proposed solution will utilize open data standards such as the OGC (Open Geospatial Consortium) Web Map Service (WMS) and NetCDF conventions, as well as widely recognized hydrodynamic modeling frameworks. By combining advanced machine learning algorithms—trained on historic flood events—with real-time observational data streams, this initiative aims to produce a predictive model that meets the stringent requirements of emergency management and infrastructure protection.
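
To illustrate the modeling core in miniature, the sketch below trains and cross-validates a flood/no-flood classifier on synthetic stand-ins for surge, tide, and wind features; a production model would train on curated historic flood records and reanalysis data.

    # 48-hour flood event classifier sketch (synthetic features and labels).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 1000
    X = np.column_stack([
        rng.normal(1.0, 0.5, n),    # forecast surge height (m)
        rng.uniform(0, 1, n),       # tide phase (0..1)
        rng.normal(10, 5, n),       # onshore wind speed (m/s)
    ])
    y = ((X[:, 0] + 0.5 * X[:, 1] + 0.05 * X[:, 2]) > 2.0).astype(int)  # synthetic label

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print("Cross-validated ROC AUC:", round(scores.mean(), 3))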

The resulting predictive system will leverage state-of-the-art AI frameworks (e.g., TensorFlow, PyTorch) and follow geospatial data standards (e.g., ISO 19115 for metadata, ISO 19128 for web map services). It will provide coastal cities with a robust decision-support tool for preemptive action, enabling emergency planners to deploy resources more effectively. The implementation will be fully documented with industry-standard practices, including model validation procedures, data source integration workflows, and API specifications for seamless integration with existing disaster management platforms.

Target Outcomes:

  • A machine learning model validated against multi-year flood data records with at least 90% accuracy in event prediction.
  • Compliance with OGC standards for geospatial data dissemination and ISO frameworks for data quality.
  • A comprehensive open-source deployment package including Docker containers, RESTful APIs, and CI/CD pipelines.