Bounties are precision-scoped, performance-based tasks that support the open-source development of critical components across the Nexus Platform. Contributors tackle specialized challenges in HPC modeling, parametric finance, AI-based early warning, or data harmonization—earning pCredits for validated outputs. Governed under GRA’s technical and Responsible Research and Innovation (RRI) review structures, Bounties power modular, auditable risk tools deployable at national and global scale. As the core engine of the Nexus Micro-Production Model (MPM), Bounties align expert contribution with operational needs—bridging innovation with impact in disaster risk reduction, financing, and intelligence.
Bounties are targeted, higher-value tasks that require specialized skills and a tangible outcome—like refining a parametric finance script or developing advanced analytics for hazard data. Unlike Quests (which often focus on smaller on-ramp tasks), Bounties typically have structured outputs, robust peer reviews, and integration paths that demand more thorough domain knowledge.
Any recognized contributor—from institutional members to domain specialists—can propose a Bounty if they identify a significant challenge in disaster risk reduction (DRR), disaster risk financing (DRF), or disaster risk intelligence (DRI) that benefits from open collaboration. The Bounty must pass a short approval cycle, ensuring alignment with RRI, strategic objectives, and data ethics guidelines.
Bounties are modular development units. Each Bounty defines its scope, deliverables, and RRI constraints. By working in these discrete, trackable chunks, contributors can efficiently tackle real-world challenges (e.g., verifying risk finance triggers, building new dashboards) without losing synergy across the larger open innovation ecosystem.
Bounties may involve data ingestion improvements, advanced analytics, geospatial model validations, parametric instrument expansions, or integration of domain knowledge (such as local hazard references). They often require coding, data science, design thinking, or policy expertise to yield high-impact outputs for risk management or finance solutions.
Each Bounty is reviewed and governed under RRI guidelines. Technical tasks must include disclaimers or ethical reflections (e.g., potential biases in data sets or local sovereignty issues), while policy-centric tasks often require multi-stakeholder feedback. Participants document these considerations to ensure the solution remains transparent, inclusive, and socially responsible.
Successful Bounty completion yields participation credits (pCredits) or partial validation credits (vCredits) if the task demands advanced review. Contributors also gain visibility, an enhanced reputation across Nexus Platforms, and deeper engagement rights (like proposing further expansions or leading certain Build sprints).
Bounties typically require multi-phase approval: (1) a technical review by domain colleagues to confirm solution quality, (2) an RRI oversight check for data or policy compliance, and (3) possible local stakeholder feedback if relevant to parametric triggers or cross-border governance. Final acceptance triggers the awarding of pCredits or partial vCredits.
If new data emerges or local conditions shift, existing Bounties can be repeated with updated parameters (e.g., new climate data sets). Bounties can also be extended to incorporate larger tasks, or forked by domain subgroups that adapt the deliverable for another region or hazard type, thereby fostering iterative open development.
Bounties are building blocks for multi-stakeholder “Builds.” Teams may compose multiple Bounties into a cohesive pipeline—e.g., verifying hazard polygons, refining parametric logic, and building real-time dashboards. Once completed, these Bounties collectively yield a final deployable module or cross-regional initiative that addresses a critical risk problem.
Depending on the scope, Bounties may yield data pipelines, parametric contract code, advanced risk dashboards, or domain-focused reports for policymaking. Each deliverable is version-controlled, documented with disclaimers, and integrated into the open environment to ensure it remains scalable, ethically valid, and adaptable for local or global DRR, DRF, and DRI needs.
Develop an open-source mobile application that provides real-time, location-based early warning alerts for extreme weather events. The application should conform to Common Alerting Protocol (CAP) standards, integrate trusted data sources, and include multilingual support for global usability.
Early warnings for extreme weather events are critical to reducing loss of life and property damage. However, many existing systems either lack local context or fail to deliver timely alerts. By integrating multiple official data feeds (e.g., from NOAA, WMO) and local crowd-sourced reports, this project aims to create a reliable and widely accessible early warning app. The solution will follow CAP standards to ensure consistency and compatibility with global alerting systems, and it will include modular, open-source components to facilitate adaptation in diverse regions.
This initiative will produce a mobile app that provides real-time, geolocated alerts based on standardized alerting protocols and verified data sources. The open-source nature of the project will enable adaptation for various languages, regions, and hazard types. Accompanying documentation and deployment instructions will make it easy for local governments and NGOs to adopt the system, improving emergency preparedness worldwide.
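To make the CAP requirement concrete, here is a minimal Python sketch of how the app’s backend might parse a CAP 1.2 alert and decide whether to notify a user. Element names follow the OASIS CAP 1.2 schema; the sample alert, severity threshold, and notification logic are illustrative assumptions, not part of this bounty’s specification.

```python
# Minimal sketch: parsing a CAP 1.2 alert and deciding whether to notify.
# Field names follow the OASIS CAP 1.2 schema; thresholds are illustrative.
import xml.etree.ElementTree as ET

CAP_NS = {"cap": "urn:oasis:names:tc:emergency:cap:1.2"}

SAMPLE_ALERT = """<?xml version="1.0"?>
<alert xmlns="urn:oasis:names:tc:emergency:cap:1.2">
  <identifier>TEST-001</identifier>
  <status>Actual</status>
  <msgType>Alert</msgType>
  <info>
    <language>en-US</language>
    <event>Flash Flood Warning</event>
    <urgency>Immediate</urgency>
    <severity>Severe</severity>
    <area><areaDesc>Lower River Basin</areaDesc></area>
  </info>
</alert>"""

def should_notify(alert_xml: str, min_severity: str = "Severe") -> list[dict]:
    """Return the alert info blocks that meet the severity threshold."""
    severity_rank = {"Minor": 1, "Moderate": 2, "Severe": 3, "Extreme": 4}
    root = ET.fromstring(alert_xml)
    hits = []
    for info in root.findall("cap:info", CAP_NS):
        severity = info.findtext("cap:severity", default="Unknown", namespaces=CAP_NS)
        if severity_rank.get(severity, 0) >= severity_rank[min_severity]:
            hits.append({
                "event": info.findtext("cap:event", namespaces=CAP_NS),
                "urgency": info.findtext("cap:urgency", namespaces=CAP_NS),
                "area": info.findtext("cap:area/cap:areaDesc", namespaces=CAP_NS),
            })
    return hits

print(should_notify(SAMPLE_ALERT))
```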
Target Outcomes:
Design an advanced GIS platform that analyzes urban heat islands (UHIs) using multi-source geospatial data, remote sensing imagery, and IoT temperature readings. The solution should follow established geospatial data standards (e.g., OGC standards, ISO 19157 for data quality) and provide actionable heat mitigation strategies.
Urban heat islands pose significant health and energy challenges, particularly in rapidly growing cities. Understanding spatial and temporal heat distribution patterns is key to crafting effective mitigation strategies. This project will incorporate data from multiple sources—such as Landsat satellite imagery, IoT-enabled temperature sensors, and municipal land-use datasets—and process it using industry-standard GIS frameworks (e.g., QGIS, ArcGIS). By adhering to open geospatial data standards and providing standardized output formats (GeoTIFF, shapefiles), the platform will facilitate integration with city planning tools.
The project will produce an open-source GIS tool that maps UHIs with high spatial and temporal resolution. It will provide detailed analysis and recommendations for urban planners, helping reduce heat exposure and improve city livability. By leveraging OGC-compliant data formats and publishing all algorithms and workflows, the tool will ensure reproducibility, scalability, and broad adoption.
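As a rough illustration of the core analysis, the sketch below computes a common UHI intensity metric (mean urban minus mean rural land surface temperature) on a synthetic grid. Real inputs would be Landsat-derived LST rasters read through a GIS stack; the mask, thresholds, and data here are stand-ins.

```python
# Illustrative sketch: a simple urban heat island (UHI) intensity metric
# from a land-surface-temperature (LST) grid and an urban/rural mask.
# Synthetic arrays stand in for Landsat-derived rasters so it is self-contained.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 100x100 LST grid (deg C) with a warmer core mimicking a city centre.
lst = 24 + rng.normal(0, 0.5, (100, 100))
yy, xx = np.mgrid[0:100, 0:100]
urban_mask = np.hypot(yy - 50, xx - 50) < 25   # "urban" pixels
lst[urban_mask] += 3.0                          # imposed heat island signal

def uhi_intensity(lst: np.ndarray, urban: np.ndarray) -> float:
    """UHI intensity = mean urban LST minus mean rural LST (a common metric)."""
    return float(lst[urban].mean() - lst[~urban].mean())

print(f"UHI intensity: {uhi_intensity(lst, urban_mask):.2f} °C")

# Hotspots: pixels more than 2 std devs above the rural baseline, a candidate
# layer to export as an OGC-compliant GeoTIFF for planners.
rural_mu, rural_sd = lst[~urban_mask].mean(), lst[~urban_mask].std()
hotspots = lst > rural_mu + 2 * rural_sd
print(f"Hotspot pixels: {int(hotspots.sum())}")
```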
Target Outcomes:
Develop a machine learning-driven disease surveillance platform that aggregates and analyzes syndromic surveillance data, social media signals, and environmental indicators. The platform should comply with WHO’s International Classification of Diseases (ICD) standards and include robust data privacy measures.
Traditional disease surveillance methods often lag behind the speed of disease spread. A modern approach must leverage AI to analyze multiple data sources simultaneously, identifying outbreaks before they escalate. This project will apply advanced AI techniques (e.g., natural language processing for social media analysis, graph-based models for contact tracing) and align with international health data standards. It will also incorporate data governance frameworks (e.g., GDPR compliance, HL7 FHIR standards) to ensure responsible data handling.
This initiative will produce a disease surveillance platform that integrates AI-driven insights, syndromic surveillance data, and environmental triggers. By complying with international standards for health data and privacy, it will enable public health authorities to respond faster and more effectively. The resulting solution will be open-source, accompanied by extensive documentation and training materials.
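One building block such a platform might use is a statistical aberration detector over daily syndromic counts. The sketch below loosely follows the CDC EARS C2 approach; the window length, z-score threshold, and synthetic data are illustrative assumptions.

```python
# Sketch of a simple outbreak-signal detector on daily syndromic counts,
# loosely following the CDC EARS C2 method: flag a day whose count exceeds
# the mean + 3 standard deviations of a recent baseline window.
import numpy as np

def ears_c2_flags(counts, baseline=7, lag=2, z=3.0):
    """Return indices of days flagged as anomalous."""
    counts = np.asarray(counts, dtype=float)
    flags = []
    for t in range(baseline + lag, len(counts)):
        window = counts[t - lag - baseline : t - lag]
        mu, sd = window.mean(), max(window.std(ddof=1), 1e-6)
        if (counts[t] - mu) / sd > z:
            flags.append(t)
    return flags

rng = np.random.default_rng(0)
daily = rng.poisson(20, 60).astype(float)
daily[45:50] += 25  # injected outbreak for demonstration
print("Flagged days:", ears_c2_flags(daily))
```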
Target Outcomes:
Develop an AI-driven policy analysis platform that extracts insights from large-scale legislative and policy datasets. The platform should integrate natural language processing (NLP) models and adhere to standards like ISO 22397 for information exchange and OECD guidelines for policy data documentation.
Governments and organizations face an overwhelming volume of complex policy documents, making it difficult to identify best practices or predict policy outcomes. By applying NLP techniques—such as topic modeling, sentiment analysis, and knowledge graph construction—this bounty aims to streamline the analysis process. The system will ingest structured and unstructured policy data, analyze trends and impacts, and present actionable insights in a user-friendly format.
This project will deliver an AI-powered tool that uses cutting-edge NLP techniques to extract, summarize, and visualize policy impacts. The platform will align with ISO and OECD standards, ensuring that data sources and analytical methodologies are transparent and reproducible. By making the codebase and analytical pipelines open-source, the solution will serve as a foundation for further research and application in the policy domain.
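For a flavor of the NLP layer, the sketch below runs topic modeling over a toy set of policy snippets with scikit-learn’s LDA. The corpus and component count are illustrative; a production pipeline would add summarization, sentiment analysis, and knowledge-graph construction as described above.

```python
# Minimal topic-modelling sketch over toy policy snippets using scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "flood insurance subsidies for coastal housing resilience",
    "renewable energy tax credits and grid modernization",
    "coastal zoning reform and flood plain construction limits",
    "solar and wind energy procurement targets for utilities",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top terms per topic, the kind of summary a policy analyst would see.
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"Topic {k}: {', '.join(top)}")
```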
Target Outcomes:
Develop a highly interactive dashboard powered by artificial intelligence, capable of analyzing multi-source data—remote sensing imagery, soil condition reports, and market price indices—to provide early warnings and actionable insights into food security risks. The system should align with internationally recognized agricultural data standards (e.g., FAO’s AGRIS standards) and employ cutting-edge visualization frameworks.
Global food systems face increasing threats from climate variability, supply chain disruptions, and resource constraints. This challenge demands a data-driven, predictive approach. By employing advanced AI techniques—such as convolutional neural networks (CNNs) for analyzing satellite imagery and gradient boosting algorithms for crop yield prediction—this project will create a comprehensive platform. The dashboard will adhere to Open Data standards (e.g., FAIR principles) and integrate with widely used agricultural data models (e.g., ISO 19156 Observations and Measurements).
This initiative will produce a food security dashboard built on open-source technologies and standardized data formats, enabling seamless integration into existing agricultural monitoring systems. The platform will support predictive analytics workflows, from data ingestion and preprocessing to model deployment and interactive visualization. Documentation will detail how to replicate and extend the dashboard’s capabilities, ensuring its usability across diverse regions and user groups.
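As a minimal sketch of the predictive layer, the example below fits a gradient-boosting crop-yield model on synthetic NDVI, rainfall, and soil-moisture features. The feature set and the synthetic relationship are assumptions standing in for real AGRIS-aligned observation data.

```python
# Illustrative gradient-boosting crop-yield model of the kind the dashboard
# could serve; features and the synthetic data-generating process are assumed.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 500
ndvi = rng.uniform(0.2, 0.9, n)
rain = rng.uniform(100, 800, n)      # seasonal rainfall (mm)
soil = rng.uniform(0.1, 0.45, n)     # volumetric soil moisture

# Assumed linear-plus-noise yield response, in tonnes/ha.
yield_t = 2.5 * ndvi + 0.004 * rain + 6.0 * soil + rng.normal(0, 0.3, n)

X = np.column_stack([ndvi, rain, soil])
X_tr, X_te, y_tr, y_te = train_test_split(X, yield_t, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.3f}")
# Predictions like these would feed the dashboard's early-warning layer
# when forecast yields fall below a region-specific threshold.
```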
Target Outcomes:
Develop a quantum computing-based simulation framework that enables large-scale, high-precision modeling of climate risk scenarios. The framework should adhere to emerging quantum standards and open data protocols, leveraging quantum-enhanced optimization techniques to model complex climate interdependencies.
Conventional simulation methods often struggle to handle the intricate interactions between climate variables, socioeconomic factors, and ecosystem responses. Quantum computing’s ability to perform certain types of computations exponentially faster than classical approaches offers a transformative opportunity. This project will leverage quantum algorithms, such as Variational Quantum Eigensolvers (VQE) for optimization and quantum Monte Carlo methods for probabilistic scenarios. It will integrate these approaches with standardized environmental datasets, following guidelines like the Copernicus Climate Data Store (CDS) formats and the Open Energy Modelling Framework (oemof).
This bounty aims to create a proof-of-concept quantum simulation framework that demonstrates significant improvements in processing time and scenario accuracy. It will adhere to existing climate data standards and incorporate reproducible workflows. By publishing all algorithms and data workflows as open-source resources, this project will provide a foundational tool for researchers, policymakers, and industry stakeholders to better anticipate and mitigate climate risks.
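To illustrate the variational principle behind VQE without quantum hardware, the toy sketch below classically simulates a one-qubit ansatz and minimizes the expectation value of a trivial Hamiltonian via the parameter-shift rule. A real implementation would use a quantum SDK and a climate-relevant cost operator; this shows only the shape of the optimization loop.

```python
# Toy VQE-style loop, simulated classically with NumPy: tune Ry(theta)|0>
# to minimize <psi| Z |psi>. Purely illustrative of the variational idea.
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ansatz(theta: float) -> np.ndarray:
    """Statevector of Ry(theta) applied to |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(theta: float) -> float:
    psi = ansatz(theta)
    return float(np.real(psi.conj() @ Z @ psi))

# Gradient descent using the parameter-shift rule for the gradient.
theta, lr = 0.3, 0.4
for _ in range(50):
    grad = 0.5 * (energy(theta + np.pi / 2) - energy(theta - np.pi / 2))
    theta -= lr * grad

print(f"theta = {theta:.3f}, energy = {energy(theta):.3f}  (exact minimum: -1)")
```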
Target Outcomes:
Create a blockchain-based solution that enhances the traceability and transparency of critical supply chains, ensuring continuity and integrity in times of crisis. The platform should conform to global supply chain standards and frameworks, such as GS1 standards for product identification and ISO 28000 for supply chain security.
Disruptions in supply chains during emergencies can lead to severe economic and humanitarian consequences. A blockchain-powered approach can provide real-time visibility into supply chain transactions, improve accountability, and facilitate rapid response. By integrating globally recognized standards, such as GS1’s EPCIS (Electronic Product Code Information Services) and ISO 22095 chain-of-custody requirements, this solution ensures a secure, interoperable environment. Trusted oracles and industry-grade blockchain networks (e.g., Hyperledger, Ethereum) will provide the foundational infrastructure.
This bounty focuses on developing a blockchain-based platform that delivers secure, verifiable, and standards-compliant supply chain transparency. By following international frameworks and providing a clear audit trail of transactions, the solution will improve resilience, reduce waste, and help ensure the timely delivery of critical goods during disruptions. Comprehensive implementation documentation and open-source smart contract libraries will make the system accessible and scalable.
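The sketch below illustrates the tamper-evidence property the ledger provides, using hash-chained EPCIS-style events in plain Python. Field names loosely mirror GS1 EPCIS; a real deployment would run on Hyperledger or Ethereum rather than an in-memory list.

```python
# Conceptual sketch of blockchain tamper-evidence for supply chain events:
# each event embeds the hash of its predecessor, so retroactive edits break
# the chain. Field names loosely mirror GS1 EPCIS events.
import hashlib, json

def event_hash(event: dict) -> str:
    return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()

def append_event(chain: list, payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    event = {"payload": payload, "prev_hash": prev}
    event["hash"] = event_hash({"payload": payload, "prev_hash": prev})
    chain.append(event)

def verify(chain: list) -> bool:
    for i, ev in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if ev["prev_hash"] != expected_prev:
            return False
        if ev["hash"] != event_hash({"payload": ev["payload"], "prev_hash": ev["prev_hash"]}):
            return False
    return True

ledger: list = []
append_event(ledger, {"epc": "urn:epc:id:sgtin:0614141.107346.2018", "bizStep": "shipping"})
append_event(ledger, {"epc": "urn:epc:id:sgtin:0614141.107346.2018", "bizStep": "receiving"})
print("chain valid:", verify(ledger))

ledger[0]["payload"]["bizStep"] = "diverted"  # tamper attempt
print("after tampering:", verify(ledger))
```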
Target Outcomes:
Design and implement a blockchain-enabled payout system using parametric insurance models and automated smart contracts. This system will streamline disaster relief payouts by adhering to international regulatory frameworks, integrating trusted oracles for real-time data verification, and providing a transparent ledger for all transactions.
Traditional disaster relief funding often suffers from inefficiencies, lack of transparency, and prolonged distribution times. A blockchain-based parametric model—where payouts are triggered by specific, pre-defined criteria (e.g., rainfall thresholds)—can resolve these challenges. This solution will be built on well-established blockchain platforms, such as Ethereum or Hyperledger Fabric, and use smart contract token standards such as ERC-20 or ERC-721 for payout tokens. Integration with trusted data oracles (e.g., Chainlink or Provable) ensures that triggers are based on verified, tamper-proof information. This approach will also consider international frameworks for financial inclusion and disaster risk financing, such as those recommended by the World Bank and the Insurance Development Forum (IDF).
The proposed system will include a suite of blockchain-based smart contracts that automate relief fund distribution upon the occurrence of a verified event. Compliance with international financial reporting standards (e.g., IFRS 17 for insurance contracts) and best practices for blockchain security (e.g., OWASP Blockchain Security Framework) will be integral. This ensures that payouts are not only prompt but also fully auditable and secure. The open-source implementation will include smart contract templates, deployment scripts, and a detailed integration guide for humanitarian organizations and insurers.
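The heart of such a contract is the trigger-to-payout mapping. The sketch below expresses that logic in Python for readability rather than Solidity; the rainfall index, tiers, and payout percentages are illustrative assumptions, and on-chain an oracle would supply the verified index value before funds are released.

```python
# Sketch of the trigger logic a parametric payout contract encodes, written
# in Python for clarity. Tiers and amounts are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class PayoutTier:
    rainfall_mm: float   # 72-hour rainfall index at/above which the tier fires
    payout_pct: float    # share of the insured limit released

TIERS = [  # sorted ascending by trigger level
    PayoutTier(rainfall_mm=150.0, payout_pct=0.25),
    PayoutTier(rainfall_mm=250.0, payout_pct=0.50),
    PayoutTier(rainfall_mm=350.0, payout_pct=1.00),
]

def parametric_payout(observed_mm: float, insured_limit: float) -> float:
    """Return the payout owed for an oracle-verified rainfall index value."""
    pct = 0.0
    for tier in TIERS:
        if observed_mm >= tier.rainfall_mm:
            pct = tier.payout_pct   # keep the highest tier met
    return round(insured_limit * pct, 2)

print(parametric_payout(180.0, 1_000_000))  # -> 250000.0
print(parametric_payout(400.0, 1_000_000))  # -> 1000000.0
```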
Target Outcomes:
Develop a distributed IoT platform that monitors water quality, usage, and availability in real-time, conforming to international IoT standards (e.g., ISO/IEC 30141 IoT Reference Architecture) and environmental data protocols. This system will provide a reliable, low-cost solution for managing water resources in water-stressed regions.
Water scarcity is a global crisis that affects billions. Efficient, data-driven water resource management requires continuous, reliable monitoring systems. By deploying IoT devices (e.g., sensors for measuring turbidity, pH, and flow rates) and integrating them into a unified cloud-based platform, this project will enable real-time insights into water system performance. The solution will leverage secure communication protocols (e.g., MQTT with TLS) and adhere to industry frameworks such as the Industrial Internet Consortium’s (IIC) Connectivity Framework and the OGC SensorThings API for sensor data interoperability.
This bounty will deliver a low-power, high-reliability IoT system that collects, transmits, and analyzes water data in real-time. The system will follow established IoT and environmental data standards, ensuring scalability and integration with broader water management initiatives. The resulting platform will be fully documented, from hardware deployment to software integration, providing a blueprint for replication in other regions.
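As a small sketch of the edge-side data path, the example below simulates sensor readings, applies basic plausibility checks, and shapes the output loosely after OGC SensorThings API Observations. The sensor ranges, Datastream IDs, and transport details (e.g., MQTT over TLS) are deployment-specific assumptions.

```python
# Sketch of the payload a water-quality node might publish, shaped loosely
# after an OGC SensorThings API Observation. Values are simulated.
import json
import random
from datetime import datetime, timezone

# Plausibility ranges used for basic quality control at the edge.
QC_RANGES = {"ph": (0.0, 14.0), "turbidity_ntu": (0.0, 1000.0), "flow_lps": (0.0, 500.0)}

def read_sensors() -> dict:
    """Stand-in for real driver calls to pH, turbidity, and flow sensors."""
    return {
        "ph": round(random.uniform(6.4, 8.2), 2),
        "turbidity_ntu": round(random.uniform(0.5, 12.0), 2),
        "flow_lps": round(random.uniform(1.0, 40.0), 2),
    }

def to_observations(readings: dict, datastream_ids: dict) -> list[dict]:
    now = datetime.now(timezone.utc).isoformat()
    obs = []
    for name, value in readings.items():
        lo, hi = QC_RANGES[name]
        if not lo <= value <= hi:
            continue  # drop implausible readings before they leave the node
        obs.append({
            "phenomenonTime": now,
            "result": value,
            "Datastream": {"@iot.id": datastream_ids[name]},
        })
    return obs

ids = {"ph": 101, "turbidity_ntu": 102, "flow_lps": 103}  # hypothetical IDs
print(json.dumps(to_observations(read_sensors(), ids), indent=2))
```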
Target Outcomes:
Develop a scalable, machine learning-driven framework that integrates multi-modal Earth observation data, historical meteorological datasets, and real-time oceanographic observations to deliver coastal flood forecasts with a 48-hour lead time. The solution should incorporate international standards, interoperable data formats, and robust validation protocols to ensure reliability and scalability across multiple coastal regions.
Coastal flooding is among the most costly and frequent natural disasters, intensified by climate change and rapid urbanization. Current forecasting methods often lack the precision, granularity, or timeliness required for proactive response measures. To address these limitations, the proposed solution will utilize open data standards such as the OGC (Open Geospatial Consortium) Web Map Service (WMS) and NetCDF conventions, as well as widely recognized hydrodynamic modeling frameworks. By combining advanced machine learning algorithms—trained on historic flood events—with real-time observational data streams, this initiative aims to produce a predictive model that meets the stringent requirements of emergency management and infrastructure protection.
The resulting predictive system will leverage state-of-the-art AI frameworks (e.g., TensorFlow, PyTorch) and follow geospatial data standards (e.g., ISO 19115 for metadata, ISO 19128 for web map services). It will provide coastal cities with a robust decision-support tool for preemptive action, enabling emergency planners to deploy resources more effectively. The implementation will be fully documented with industry-standard practices, including model validation procedures, data source integration workflows, and API specifications for seamless integration with existing disaster management platforms.
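A minimal sketch of the forecasting core appears below: a supervised model mapping a 24-hour window of water levels to the level 48 hours ahead, trained on a synthetic tide-plus-surge series. The data, feature window, and model choice are illustrative; a production system would blend ML with hydrodynamic model output and real observation streams.

```python
# Minimal forecasting sketch: predict the water level 48 h ahead from the
# last 24 h of observations. The tide-plus-surge series is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
hours = np.arange(5000)
tide = 1.2 * np.sin(2 * np.pi * hours / 12.42)   # semi-diurnal tide (m)
surge = np.convolve(rng.normal(0, 0.05, hours.size), np.ones(24), "same")
level = tide + surge

LOOKBACK, HORIZON = 24, 48  # use the last 24 h to predict 48 h ahead
X, y = [], []
for t in range(LOOKBACK, hours.size - HORIZON):
    X.append(level[t - LOOKBACK:t])
    y.append(level[t + HORIZON])
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))  # chronological split to avoid leakage
model = RandomForestRegressor(n_estimators=100, random_state=0, n_jobs=-1)
model.fit(X[:split], y[:split])
print(f"48h-ahead R^2: {model.score(X[split:], y[split:]):.3f}")
```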
Target Outcomes: