Hackathon.lu 2026: a strong year for open cybersecurity collaboration

Hackathon.lu 2026, held in Luxembourg on 14–15 April 2026, once again showed what makes this event special: it is not just a place to present ideas, but a place where ideas turn into code, releases, integrations, datasets, pull requests, and concrete roadmaps.
Looking across the Discourse project updates, the overall picture is clear. This year’s edition produced more than thirty concrete project outcome threads, spanning threat intelligence, malware analysis, detection engineering, vulnerability intelligence, graph exploration, forensics, and infrastructure. Some teams shipped releases on the spot. Others used the two days to validate designs, harden code, identify weaknesses, or connect previously separate tools into more useful workflows.
The result is a hackathon that delivered not only new features, but also better interoperability across the open-source cybersecurity ecosystem.
The big picture
Several themes stood out across the projects:
- MISP remained a major center of gravity, with work on AI-assisted workflows, graph exploration, export formats, hunts, user experience, privacy-preserving workflows, and engineering tooling.
- Detection and runtime visibility improved, especially around Kunai, Kubernetes, rootkit detection, and rule handling.
- Vulnerability intelligence workflows became more connected, with improvements around EPSS data import, forecasting work such as TARDISight, Tsunami-based sightings, CPE assignment, and Vulnerability-Lookup integrations.
- Hackathon outcomes were not limited to shiny features: documentation fixes, deployment pain points, code hardening, security assessments, and reproducibility work were all part of the story.
That balance matters. A healthy open-source security ecosystem needs both innovation and maintenance, and Hackathon.lu 2026 delivered both.
MISP saw one of the strongest clusters of outcomes
A large share of the visible momentum this year came from projects around MISP and the broader tooling orbit around it.
One of the most ambitious efforts was AIPITCH, a new round of work on a generic MISP AI module. The team spent the hackathon defining use cases, refining architecture, and producing a second proof-of-concept implementation for combining LLM-based NLP tasks with MISP. Just as importantly, the work emphasized guardrails, testing, metadata, and tagging of AI-assisted output, which suggests a careful and practical approach rather than AI for AI’s sake.
Another major milestone was the release of MISP Engineering Bay v1.0, a collection of browser-based authoring tools designed to make it easier to build and maintain MISP data structures. The first release includes an Object Template Creator and a Galaxy Editor, both aimed at reducing the friction of maintaining MISP’s JSON-driven ecosystem.
MISP Workbench also had a particularly productive hackathon. Reported outcomes included:
- MITRE ATT&CK Pattern hunts
- a new hunts heatmap for coverage visualization
- broader work on TTP/MITRE hunts
- an LLM-assisted query builder
- JA4+ correlations
- continued analyst-focused workflow improvements
Taken together, these updates make Workbench look increasingly like a serious operational layer for large-scale indicator analysis and hunting.
There was also steady progress on MISP workflows themselves. One thread added support for misp-module results inside workflow roaming data and introduced workflow environment variables for ad-hoc workflows. Another used that work to prototype privacy-enhancing workflows, specifically a Private Set Intersection (PSI) setup that lets separate MISP instances compare attribute intersections without exposing the underlying sensitive data.
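To give a flavor of the idea (a toy illustration only, not the MISP implementation), a Diffie-Hellman-style PSI can be sketched in a few lines: each instance blinds its hashed attributes with a secret exponent, and because the blinding commutes, doubly-blinded values can be compared without revealing the attributes themselves.

```python
import hashlib

# Toy commutative-blinding PSI sketch. Parameters and attribute values
# are invented for illustration; a production PSI needs proper group
# parameters and a vetted protocol.
P = 2**127 - 1  # a Mersenne prime used as a toy modulus


def h(value: str) -> int:
    """Hash an attribute into the multiplicative group mod P."""
    return int.from_bytes(hashlib.sha256(value.encode()).digest(), "big") % P or 1


def blind(x: int, secret: int) -> int:
    """Blind a group element with a secret exponent."""
    return pow(x, secret, P)


# Each instance keeps its own exponent secret.
a_secret, b_secret = 0x1234567, 0x7654321

a_attrs = {"1.2.3.4", "evil.example", "5.6.7.8"}
b_attrs = {"evil.example", "9.9.9.9"}

# A blinds its set and sends it to B; B blinds it a second time.
a_once = {blind(h(x), a_secret) for x in a_attrs}
a_twice = {blind(y, b_secret) for y in a_once}

# B does the symmetric exchange with A.
b_once = {blind(h(x), b_secret) for x in b_attrs}
b_twice = {blind(y, a_secret) for y in b_once}

# Because h(x)^(a*b) == h(x)^(b*a) mod P, doubly-blinded values match
# exactly on the true intersection, and nothing else is revealed.
print(len(a_twice & b_twice))  # size of the shared-attribute set
```

The commutativity of exponentiation is what makes the comparison safe: each side only ever sees blinded values, never the other party's raw attributes.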
On the user-experience side, an Audio Assistant in MISP explored whether event content and summaries can be delivered through speech, including local-model-backed summarization and configurable plugin settings. In parallel, a separate initiative launched user interviews for CTI practitioners, aiming to collect real-world usage patterns and UX personas to feed future MISP development.
Finally, graph and visualization work was everywhere around the MISP ecosystem. Pivotick became a recurring thread across multiple projects:
- experimental integration into MISP as a replacement for the correlation graph in the Overmind theme
- migration of AIL correlation and relationship graphs to Pivotick
- improvements to Pivotick UI and rendering
- updates to misp-galaxy graph export to support Pivotick static output and better filtering
This recurring use of Pivotick across projects says a lot: visual exploration of CTI relationships is clearly becoming a shared priority.
Kunai work focused on real-world deployment and detection depth
The Kunai project had one of the clearest “from lab to operations” tracks during the event.
A first line of work explored running Kunai in Kubernetes, resulting in a minimal proof-of-concept configuration and a concrete upstream suggestion to make host UUID handling externally configurable. That was then extended by a second project: a Kubernetes enrichment daemon that connects to the local container runtime interface and generates JSON metadata to enrich process context with container and Kubernetes information such as root PID and labels.
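To make the enrichment idea concrete, the daemon's output per container could resemble the JSON record below. The field names are invented for illustration; only root PID and labels are mentioned in the project write-up.

```python
import json

# Hypothetical shape of a per-container enrichment record that a
# Kubernetes-aware daemon might emit for correlating process events
# with container context (illustrative, not Kunai's actual schema).
record = {
    "container_id": "abc123",       # assumed field: runtime container ID
    "root_pid": 4242,               # root PID of the container, as in the write-up
    "kubernetes": {
        "namespace": "default",     # assumed field
        "pod": "web-7c9f",          # assumed field
        "labels": {"app": "web", "tier": "frontend"},  # labels, as in the write-up
    },
}

print(json.dumps(record, indent=2))
```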
This is exactly the kind of hackathon progression that matters: one topic uncovers a limitation, and a second topic turns that finding into an engineering response.
A third Kunai thread focused on detection quality. Work on LinkPro eBPF rootkit analysis confirmed that Kunai already detects most suspicious activity associated with the published samples, and the team assembled a 12 GB dataset of potential eBPF malware samples for further analysis and future detection improvements.
On the build and deployment side, Kunai also benefited from a simplified Dockerfile and reduced container size, with a pre-built container published for easier testing and deployment.
Altogether, the Kunai outcomes show a project maturing across detection, packaging, and cloud-native operations at the same time.
Vulnerability intelligence and asset context got tighter integration
Hackathon.lu 2026 also produced several outcomes that improved the flow between asset identification, vulnerability metadata, and shared observations.
For Vulnerability-Lookup, a new EPSS importer was added to fetch daily EPSS data and store per-CVE metadata for later use. That is a practical step toward making exploit-likelihood context more immediately available in open vulnerability workflows.
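For readers unfamiliar with the feed, FIRST publishes EPSS as a daily CSV with per-CVE score and percentile columns. A simplified sketch of what such an importer does is shown below; inline sample rows stand in for the real download, and the dictionary layout is illustrative rather than the actual Vulnerability-Lookup code.

```python
import csv
import io

# Sample rows in the EPSS CSV column layout (cve, epss, percentile);
# the scores here are made up for the example.
sample = """cve,epss,percentile
CVE-2024-0001,0.97452,0.99901
CVE-2024-0002,0.00042,0.05120
"""

# Parse the feed into per-CVE metadata, ready to be stored and joined
# with other vulnerability context later.
scores = {}
for row in csv.DictReader(io.StringIO(sample)):
    scores[row["cve"]] = {
        # EPSS estimates the probability of exploitation activity
        # in the next 30 days.
        "epss": float(row["epss"]),
        "percentile": float(row["percentile"]),
    }

print(scores["CVE-2024-0001"])
```

Keeping the score and percentile together per CVE is what lets downstream workflows rank vulnerabilities by exploit likelihood rather than severity alone.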
A second contribution, TsunamiSight, extracts vulnerability-related observations from Google Tsunami Security Scanner plugins and publishes them as sightings to a Vulnerability-Lookup instance. This is a strong example of the kind of bridge-building that hackathons are ideal for: taking useful signals that already exist elsewhere and feeding them into a broader knowledge ecosystem.
Asset management also saw a useful improvement through the integration of CPE Guesser into Mercator. Users can now search and assign CPE identifiers directly from Mercator’s cartography forms, making the path from component inventory to vulnerability exposure assessment more direct and less error-prone.
Related work on CPE Editor looked at how collaborative CPE editing could evolve within the GCVE context, including questions around UUID allocation, relationships between vendors and products, and metadata structure. This was more foundational than user-facing, but it points toward the longer-term problem of maintaining better shared product metadata in open ecosystems.
Releases, hardening, and maintenance were equally important outcomes
One of the healthiest signs in the Discourse activity is how much work was devoted to maintenance, fixes, review, and validation, not just feature announcements.
BSimVis v0.1.0 was released with an API and web interface for binary similarity analysis, function diffing, tagging, filtering, and visualization. That is a tangible shipping outcome.
The IDPS-ESCAPE / SATRAP-DL / PyFlowintel cluster also reported a productive hackathon, including validation of deployment scenarios, unified configuration work, a prototype management GUI, unit tests, deployment artifact updates, and follow-up changelog entries.
SSLDump work focused on code quality and resilience: testing a proposed patch, starting a fix to neutralize control characters in output, refining OpenSSL 3 compatibility work, and integrating bounds-checking improvements identified during review.
The Sysdiagnose Analysis Framework (SAF) saw issue fixes, a new case management library, and parser-related improvements, while AIL/MISP contribution work surfaced deployment friction, resulting in documentation clarifications and discussion around removing or replacing confusing legacy installer material.
Even more valuable was the explicit vulnerability assessment of DnsLiar. Instead of simply adding features, one thread documented fuzzing, stress testing, logic review, and several concerns: IP leakage behavior, post-forward filtering inefficiency, lack of DNS amplification protections, and risky unwrap usage in Rust. In parallel, the DnsLiar project itself started work on a whitelist mechanism to improve reproducibility across deployments. Together, those two threads show the kind of constructive, security-minded feedback loop that a good hackathon should encourage.
Experimental ideas also moved forward
Not every successful hackathon project ends in a release. Some of the most useful outcomes are prototypes, datasets, design explorations, or proof-of-concept repositories that define the next phase of work.
That was the case for location-based document tagging, where discussion around geolocation terminology and Bloom filters led to work-in-progress code in the fastopic repository.
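As background on the data structure involved (a generic sketch, not the fastopic code), a Bloom filter offers a compact, probabilistic membership test: it can report false positives, but never false negatives, which makes it attractive for checking location tags without storing the terms themselves.

```python
import hashlib

# Minimal Bloom filter sketch (illustrative only). Size and hash count
# are arbitrary example parameters.
class BloomFilter:
    def __init__(self, size: int = 1024, hashes: int = 4):
        self.size = size
        self.hashes = hashes
        self.bits = bytearray(size)

    def _positions(self, item: str):
        # Derive several bit positions by salting the hash per round.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item: str) -> bool:
        # True means "probably present"; False means "definitely absent".
        return all(self.bits[pos] for pos in self._positions(item))


bf = BloomFilter()
bf.add("Luxembourg")
print("Luxembourg" in bf)   # True
print("Reykjavik" in bf)    # almost certainly False
```

The trade-off is tunable: more bits and more hash rounds lower the false-positive rate at the cost of memory and computation.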
It was also visible in Rulezet, which explored how detection rules and bundles could be exported into MISP as structured objects and events, and in the Meluxina thread, which documented practical steps and lessons around using HPC infrastructure and batch jobs for model fine-tuning.
The Forensics Training – Bad out of Hell topic similarly focused on a concrete training scenario around hidden data in FAT32 and the behavior of forensic tooling, which is exactly the sort of practitioner knowledge that benefits from collaborative experimentation.
Additional work and research on image correlation was initiated with the team from UCD CCI, focusing on how to enable correlation between Lacus, LookyLoo, and AIL. Candidate algorithms were reviewed and collected, which may lead to a publication soon.
What the 2026 edition tells us
The strongest conclusion from the 2026 project roundup is that Hackathon.lu is operating as an integration engine for open cybersecurity.
The event is not only helping individual tools improve in isolation. It is creating connections between projects:
- MISP with AI modules, workflows, and graph tooling
- Kunai with Kubernetes context and malware analysis
- Mercator with CPE Guesser and Vulnerability-Lookup
- Tsunami plugins with sightings publication
- Privacy-enhancing techniques with operational CTI workflows
Just as importantly, the event keeps making room for the unglamorous but essential work: fixing deployment pain, reducing container size, adding tests, reviewing architecture, documenting issues, and identifying security weaknesses before they become bigger problems.
That is the real outcome of Hackathon.lu 2026. Not a single flagship announcement, but a broad, visible acceleration across a whole ecosystem of free and open-source cybersecurity tools.
And that is probably the best measure of success for a hackathon like this: when the community leaves not just inspired, but with code merged, releases cut, bugs found, workflows connected, and a clearer map of what to build next.