Quests are guided micro-tasks that enable individuals and institutions to enter the Nexus Ecosystem by contributing to real-world disaster risk challenges. Designed to align with Responsible Research and Innovation (RRI) principles, Quests support foundational participation in areas such as geospatial annotation, policy translation, AI onboarding, and parametric trigger testing. Each completed Quest builds contributor credentials through eCredits and feeds into the broader Nexus micro-production workflow. Quests are essential for democratizing access to DRR, DRF, and DRI innovation: they activate diverse expertise while reinforcing transparency, inclusion, and ethical engagement at the system's edge.
Quests are structured, modular tasks that introduce or guide participants through concrete challenges, ranging from satellite data annotation to parametric finance calibration. They break large projects into manageable segments, letting contributors learn in real time while delivering meaningful outputs that feed into open-source DRR, DRF, and DRI solutions.
Anyone with a relevant interest, whether technical (e.g., data analysts, AI engineers) or domain-focused (e.g., climate scientists, municipal planners, local field responders), can join a Quest. The system accommodates all levels of expertise, from novices earning eCredits on simpler tasks to advanced researchers working toward pCredits or vCredits.
Each Quest is aligned with a real-world risk application:
DRR Quests might involve hazard mapping or early warning improvement;
DRF Quests focus on parametric financing and insurance triggers;
DRI Quests tackle data analytics, AI modeling, or geospatial intelligence.
By completing relevant tasks, participants actively strengthen these critical domains with open data layers, validated models, or improved operational procedures.
The micro-production model (MPM) breaks large, complex risk-innovation goals into a series of small, trackable work units (Quests). This approach provides quick wins and iterative improvements while fostering continuous peer-driven engagement. Participants can progress seamlessly from initial engagement tasks to more advanced modules, earning eCredits, pCredits, and eventually vCredits as their contributions grow in depth and significance.
Each Quest is structured to incorporate safeguards around data integrity, cultural and local sovereignty, equity of coverage, and potential algorithmic bias. In practice, this means adding disclaimers about incomplete data or uncertain thresholds, verifying local input or ground truth, and systematically inviting peer review to surface hidden biases. Quests thus keep all innovation transparent, inclusive, and ethically aligned.
It depends on the technical tier of the Quest. Some Quests require only basic familiarity with data labeling or simple scripting, while advanced Quests may involve specialized geospatial platforms, AI frameworks, or parametric simulation scripts. Each Quest lists recommended software, skill sets, and reference material (e.g., mapping tools, data-cleaning scripts) to help you prepare effectively.
Completion of a Quest yields a combination of eCredits, pCredits, or partial vCredits, reflecting different levels of engagement, participation, and validation. These credits track your progression, unlock advanced resources or Bounties, and grant you greater influence (e.g., proposal rights, governance roles) in the broader ecosystem. Completing Quests also builds tangible portfolio pieces (e.g., validated data layers or parametric modules) recognized across the Nexus Platforms.
Most Quests have built-in peer collaboration steps. For instance, hazard mapping Quests might require multiple experts to verify polygon accuracy, while parametric finance Quests might need local feedback on threshold fairness. Once the peer (or small group) endorses your submission, the system logs partial or full validation credits and awards vCredits to validators. This peer-driven environment ensures multiple sets of eyes catch mistakes or biases early.
Yes. Many Quests are iterative, meaning new data sources or revised thresholds can trigger updated Quests. For instance, hazard zones might need reannotation after major environmental changes (landslides, deforestation). Similarly, parametric instruments may be re-benchmarked if climate data or coverage demands shift. This keeps open, evolving DRR, DRF, and DRI solutions current with real-world conditions.
Participants, institutions, or local communities of the GRA can submit Quest proposals through official channels (platform committees or designated working groups). Proposals must outline the real-world impact, identify the skill sets involved, and demonstrate how the tasks connect to DRR, DRF, or DRI objectives under RRI guidelines. Following a short approval cycle, your Quest can be published, enabling others to join, collaborate, and enrich the Nexus Ecosystem (NE).
Multiple parametric instruments for flood coverage exist, each with unique triggers, payout speeds, and coverage biases. This Quest demands a data-driven benchmark assessing how each solution performs against real flood event sets, along with its cost feasibility and distribution equity. Contributors do not rely on HPC resources here; advanced data analytics or containerized batch scripts suffice to systematically evaluate event matching.
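The event-matching evaluation above can be sketched as a small script. This is a minimal illustration, not a prescribed methodology: the `Instrument` class, the single rainfall threshold per instrument, and the hit-rate/false-alarm metrics are all simplifying assumptions for demonstration.

```python
from dataclasses import dataclass

@dataclass
class Instrument:
    name: str
    threshold_mm: float  # illustrative: peak rainfall that activates a payout

def benchmark(instrument, rainfall_by_event, observed_floods):
    """Replay an instrument's trigger against a historical event set.

    rainfall_by_event: {event_id: peak rainfall in mm}
    observed_floods:   set of event_ids where flooding actually occurred
    Returns (hit_rate, false_alarm_rate).
    """
    triggered = {e for e, mm in rainfall_by_event.items()
                 if mm >= instrument.threshold_mm}
    hits = triggered & observed_floods
    false_alarms = triggered - observed_floods
    hit_rate = len(hits) / max(len(observed_floods), 1)
    false_alarm_rate = len(false_alarms) / max(len(triggered), 1)
    return hit_rate, false_alarm_rate

# Hypothetical event set: peak rainfall per event, and which events flooded
events = {"e1": 120.0, "e2": 45.0, "e3": 95.0, "e4": 150.0}
floods = {"e1", "e4"}

for inst in (Instrument("A", 100.0), Instrument("B", 40.0)):
    hr, far = benchmark(inst, events, floods)
    print(inst.name, round(hr, 2), round(far, 2))
```

A real benchmark would add cost-feasibility and equity dimensions, but the same replay structure applies: every candidate instrument is scored against the same shared event history.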
In modern DRR or DRI systems, real-time sensors (rainfall gauges, seismographs, tide monitors) produce continuous data streams that must be quickly triaged to ensure correct interpretation. This Quest is about systematically identifying and filtering anomalies, dropouts, or sensor drifts within these real-time feeds. By employing robust data auditing methods, you preserve situational awareness for early warning dashboards and parametric triggers, especially in high-frequency hazards like flash floods or tsunamis.
A robust architecture for triage might combine containerized microservices for ingestion, rule-based anomaly detection (like Rolling Median or DBSCAN clustering), and distributed logs for collaborative peer review. The RRI lens ensures that sensor coverage or granularity does not discriminate against remote or under-instrumented areas, establishing disclaimers where data confidence is low.
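The rolling-median rule mentioned above can be sketched in a few lines. This is a toy illustration under stated assumptions: the window size, the absolute tolerance, and the simulated gauge readings are all hypothetical, and a production triage service would scale the tolerance to each sensor's noise profile.

```python
from collections import deque
from statistics import median

def flag_anomalies(stream, window=5, tolerance=1.5):
    """Flag readings that deviate from the rolling median of recent
    trusted readings by more than `tolerance`. During warm-up (before
    the window fills) all readings are accepted unflagged."""
    buf = deque(maxlen=window)
    suspects = []
    for i, value in enumerate(stream):
        if len(buf) == window and abs(value - median(buf)) > tolerance:
            suspects.append(i)  # likely spike, dropout, or drift onset
        else:
            buf.append(value)   # only trusted readings feed the baseline
    return suspects

# Simulated rain-gauge feed with one spike (9.8) and one dropout (0.0)
readings = [2.1, 2.3, 2.2, 2.4, 2.2, 9.8, 2.3, 0.0, 2.2, 2.1]
print(flag_anomalies(readings))  # [5, 7]
```

Excluding flagged readings from the buffer (rather than blindly appending them) keeps a single spike from contaminating the baseline used to judge subsequent readings.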
Parametric finance instruments enable rapid payouts by activating coverage upon crossing specific climate or environmental thresholds. Ensuring that these triggers are carefully validated can protect smallholder farmers from undue risk burdens or coverage shortfalls. This Quest requires analyzing multiple sets of meteorological data—potentially from regionally distributed weather stations or remote sensing—to confirm that triggers reflect actual phenomena like extreme drought onset, temperature spikes, or consecutive rainfall deficits.
Technically, the design might incorporate time-series correlation checks, segmentation algorithms for multi-annual climate cycles, and local feedback loops for equity. By merging domain insights (e.g., agronomic thresholds for crop stress) with robust data analytics, participants can surface potential biases, such as ignoring microclimates or historically under-recorded zones. RRI compliance requires disclaimers around data-limited intervals or uncertain station calibrations. The final parametric logic merges into modular parametric finance frameworks that can be updated or repurposed for multi-risk coverage beyond agriculture (e.g., livestock, fisheries).
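One common trigger shape referenced above, consecutive rainfall deficits, can be validated with a simple replay over daily records. The dry-day threshold and run length below are illustrative assumptions; real instruments would calibrate both against agronomic crop-stress data and local feedback.

```python
def deficit_trigger(daily_rain_mm, dry_threshold_mm=1.0, run_length=15):
    """Return True if the record contains `run_length` consecutive days
    with rainfall below `dry_threshold_mm` -- one common shape for a
    drought-onset parametric trigger. Both parameters are illustrative."""
    run = 0
    for mm in daily_rain_mm:
        run = run + 1 if mm < dry_threshold_mm else 0
        if run >= run_length:
            return True
    return False

# Hypothetical season: 10 dry days, one rain event, then 16 dry days
season = [0.0] * 10 + [5.2] + [0.0] * 16 + [3.1]
print(deficit_trigger(season))  # True: the 16-day dry run crosses the threshold
```

Replaying a candidate trigger over many historical seasons, and comparing activations against known drought impacts, is one way to check whether thresholds reflect actual phenomena rather than data artifacts.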
A stable data pipeline underpins advanced analytics, from multi-regional flood forecasting to parametric pay-out triggers. This Quest calls for systematically auditing a pipeline segment, focusing on naming integrity, duplication, or undocumented fields. Contributors also produce disclaimers for data sets that might be restricted, incomplete, or subject to local privacy laws.
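Two of the audit checks named above, undocumented fields and duplicated records, can be sketched over a CSV segment. The column names and documented-schema set below are hypothetical; an actual audit would read the schema from the pipeline's own metadata registry.

```python
import csv
import io

DOCUMENTED = {"station_id", "timestamp", "rain_mm"}  # assumed schema

def audit_rows(raw_csv, documented=DOCUMENTED):
    """Audit a CSV segment for undocumented columns and duplicate rows.
    Returns (undocumented_field_names, duplicate_row_count)."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    undocumented = set(reader.fieldnames) - documented
    seen, dupes = set(), 0
    for row in reader:
        key = tuple(sorted(row.items()))  # order-independent row identity
        if key in seen:
            dupes += 1
        seen.add(key)
    return undocumented, dupes

sample = """station_id,timestamp,rain_mm,tmp_col
S1,2024-01-01T00:00,4.2,x
S1,2024-01-01T00:00,4.2,x
S2,2024-01-01T00:00,1.1,y
"""
print(audit_rows(sample))  # ({'tmp_col'}, 1)
```

The same pattern extends to the other audit targets mentioned, such as naming-convention checks per field or flagging rows that fall under access restrictions.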
Highly sophisticated risk simulations can remain underutilized if they do not align with first-responder workflows. This Quest tasks contributors with collecting structured feedback from on-the-ground responders about scenario clarity, timing of alerts, or user-interface complexities. The resultant improvements feed into robust scenario design that marries advanced analytics with real response rhythms.
Cross-border policies aim to unify resource sharing, data exchange, and parametric logic for multi-nation disaster scenarios. This Quest involves testing digitized or textual clauses (potentially with integrated triggers) against sample or historical cross-border hazard data, ensuring they remain operationally and ethically aligned. Instead of HPC usage, containerized scenario scripts or parallel analyses can suffice for data correlation.
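A digitized clause can be replayed against historical events without HPC, as suggested above. The clause shape (a predicate over event attributes) and the sample basin records are hypothetical; real Quests would use the actual digitized policy text and vetted hazard archives.

```python
def evaluate_clause(clause, events):
    """Replay a digitized clause (a predicate on event attributes)
    against historical cross-border events. Returns (activations, total)."""
    activated = [e for e in events if clause(e)]
    return len(activated), len(events)

# Hypothetical historical record of river-basin events
history = [
    {"basin": "shared", "peak_flow_m3s": 1800},
    {"basin": "shared", "peak_flow_m3s": 900},
    {"basin": "domestic", "peak_flow_m3s": 2100},
]

# Illustrative clause: joint response when a shared basin exceeds 1500 m3/s
clause = lambda e: e["basin"] == "shared" and e["peak_flow_m3s"] > 1500
print(evaluate_clause(clause, history))  # (1, 3)
```

Comparing activation counts across candidate clause wordings gives negotiators a concrete, data-grounded basis for judging whether a threshold is operationally and ethically reasonable.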
This Quest centers on collecting and annotating Earth Observation imagery to pinpoint hazard-prone geospatial features such as floodplains, landslide corridors, urban heat islands, or coastal erosion zones. The objective is twofold: (1) gather fine-grained hazard intelligence essential to local risk planning, and (2) develop an open geospatial resource for subsequent modeling, forecast validation, and policy alignment. By systematically tagging and verifying these features, contributors foster a baseline reference that helps unify multi-layer data (demographics, historical incidents) into meaningful risk-lens analytics. The focus on robust design includes adopting standardized naming conventions (e.g., OGC-compliant metadata), employing spatiotemporal indexing, and using geostatistical checks to minimize error margins. This ensures that newly tagged data remains scalable for advanced correlation (e.g., displacement triggers, parametric thresholds) and ethically grounded under responsible research and innovation (RRI) guidelines.
On an expert-architecture level, the hazard mapping pipeline might utilize containerized microservices to process input imagery, store annotations in version-controlled layers (e.g., Cloud Optimized GeoTIFF or Zarr format), and produce a unified shapefile or vector layer for open risk knowledge libraries. By collaborating with local domain experts (coastal engineers, climatologists), participants ensure the mapping remains inclusive of ephemeral hazards or unregistered localities often ignored by standard risk classification. This Quest thus merges geospatial analytics with RRI-based disclaimers on data resolution, local sovereignty, and possible uncertainties.
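A small part of the annotation workflow, validating a single tagged feature before it enters the version-controlled layer, can be sketched as follows. The tag vocabulary and the GeoJSON-like feature are illustrative assumptions; a production pipeline would validate against the Quest's published OGC-compliant metadata schema.

```python
def validate_annotation(feature, allowed_tags):
    """Check one GeoJSON-like hazard annotation for naming-convention
    compliance and coordinate sanity. Returns a list of error strings
    (empty if the feature passes)."""
    errors = []
    tag = feature.get("properties", {}).get("hazard_tag")
    if tag not in allowed_tags:
        errors.append(f"unknown hazard_tag: {tag!r}")
    for lon, lat in feature.get("geometry", {}).get("coordinates", [[]])[0]:
        if not (-180 <= lon <= 180 and -90 <= lat <= 90):
            errors.append(f"out-of-range coordinate: ({lon}, {lat})")
    return errors

# Hypothetical controlled vocabulary drawn from the Quest description
TAGS = {"floodplain", "landslide_corridor", "urban_heat_island"}

feat = {
    "type": "Feature",
    "properties": {"hazard_tag": "floodplain"},
    "geometry": {"type": "Polygon",
                 "coordinates": [[(36.8, -1.3), (36.9, -1.3), (36.9, -1.2)]]},
}
print(validate_annotation(feat, TAGS))  # [] -> passes both checks
```

Running such checks before peer review keeps the validation rounds focused on substantive questions (is this really a floodplain?) rather than formatting mistakes.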
This Quest merges a short AI/ML code sprint with an advanced Work-Integrated Learning Path (WILP). Participants implement or enhance an AI/ML pipeline targeting a recognized DRR or DRI issue—such as climate-driven vector-borne disease spikes, population displacement predictions, or multi-hazard synergy forecasting. The emphasis is on building an ethically grounded solution with disclaimers around interpretability, data constraints, and local context.
Parametric finance or micro-insurance guides, often written in specialized industry jargon, can limit community uptake if not localized or translated. This Quest merges translation expertise with domain knowledge, ensuring that nuances of parametric triggers, coverage indices, and disclaimers remain accurate and culturally relevant. The “robust design” aspect includes embedding local analogies or references (e.g., seasonal names, local finance terms) without losing the technical precision.
Predicting landslides requires data-driven modeling of geological, climatological, and topographical signals. This Quest entails an in-depth responsible-AI and performance audit of an existing landslide AI pipeline: checking coverage for atypical slope profiles, interpretability tooling, and disclaimers for data-limited or indigenous terrains. The design fosters a robust approach, from verifying the model's geostatistical assumptions to ensuring locally relevant disclaimers.
Experts may incorporate advanced interpretability frameworks (like post-hoc Saliency Maps or Grad-CAM), multi-ensemble verification (where multiple models converge on a single risk classification), or spatiotemporal indexing. This merges domain-based geotechnical knowledge with a thorough RRI approach that acknowledges rural or indigenous constraints.
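The multi-ensemble verification idea above can be reduced to a simple quorum rule. This sketch assumes categorical risk classes and an illustrative 75% agreement quorum; real audits would tune the quorum per hazard type and route low-agreement cells to human reviewers.

```python
from collections import Counter

def ensemble_verdict(predictions, quorum=0.75):
    """Multi-ensemble verification: accept a risk class only when at
    least `quorum` of the models agree on it; otherwise flag the cell
    for human review. The quorum value is illustrative."""
    label, votes = Counter(predictions).most_common(1)[0]
    if votes / len(predictions) >= quorum:
        return label
    return "needs_review"

print(ensemble_verdict(["high", "high", "high", "moderate"]))    # "high"
print(ensemble_verdict(["high", "moderate", "low", "moderate"])) # "needs_review"
```

Routing disagreement to review, rather than averaging it away, is what makes the ensemble useful for the RRI goals above: the cells where models diverge are often exactly the data-limited terrains needing local input.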
The Global Centre for Risk and Innovation (GCRI)