31. Mar 2026

AI-Native: Lessons Learned from Accso's Internal AI Hackathons

With our AI-native approach, we at Accso integrate AI into the daily work of every team member – for example through hackathons where we experiment with AI in software development, ranging from AI-assisted coding to agentic coding. Cheryl Yamoah, Domenic Pawlitte, and Daniel Blümel have summarized the experiences and insights we have gained so far.

Author

Daniel Blümel


Author

Cheryl Josephine Yamoah


Author

Domenic Pawlitte

AI Hackathons at Accso

For Accso, “AI Native” means that AI is the natural foundation of our work – in client solutions, in (AI-supported) projects, and in internal processes through meaningful automation.

To achieve this vision, we have been holding hackathons in our various teams since July 2025. Hackathons complement training and day-to-day experimentation, empowering employees to experiment on their own, gain experience, and find inspiration for their own work context.

Expectation vs. Reality: “Faster, but with more effort” is not a contradiction

The majority of our teams have already held their hackathons, and the results from four of them have been analyzed in detail. To this end, participants were surveyed in advance about their expectations and afterward about their experiences.  

A pattern emerged even in the initial surveys: Many expected AI to speed up work, but at the same time, they anticipated more iterations, more discussions with the AI, and more loops. That sounds paradoxical, but it isn’t: “Faster” here often means “faster to an initial version,” not “faster to a robust solution.”  

Over time, something else interesting happened: expectations regarding agentic coding dropped significantly in a later hackathon – while the expectation that the effort would remain high stayed roughly the same. Why exactly is that? Our guess: the hype is cooling off, and experience makes people more realistic.

Output of our AI hackathons: from greenfield to testing

The goal of the hackathons is to apply AI within the participants’ work context. Accordingly, several classes of results emerged, mapping onto artifacts along the software engineering process:

  1. Runnable Greenfield demos were the most common type: new applications “from scratch,” quickly demonstrable – a clear strength for demos and prototypes.  
  2. Brownfield extensions (extending the existing codebase) were also present. Here, the existing context brings its own kind of complexity: project-specific dependencies, internal libraries, or implicit architectural rules are difficult for agents to “guess” and must therefore be explicitly communicated. At the same time, the existing codebase also provides guidance: existing patterns and conventions can serve as a reference for the agent. It is crucial to provide this context in a targeted manner, as the agent otherwise runs the risk of generating solutions that are functionally correct but architecturally inappropriate. 
  3. High-quality artifacts are achievable: in one hackathon, for example, functional accessibility tests were generated that check automatable criteria.  
  4. Documentation and planning (architecture diagrams, BPMN, reports, presentations) also emerged as usable output – in other words, “more than just code.”  
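The generated accessibility tests themselves were project-specific, but the underlying pattern is easy to illustrate. As a minimal sketch – not the hackathon’s actual output – the following Python snippet checks one automatable criterion (images must carry an `alt` attribute, per WCAG 1.1.1) using only the standard library:

```python
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    """Collects the positions of <img> tags that lack an alt attribute."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; a missing alt is a violation
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())  # (line, column)


def check_alt_texts(html: str) -> list:
    """Return the positions of all <img> tags without an alt attribute."""
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.violations


# Example: one compliant and one non-compliant image
sample = '<img src="logo.png" alt="Accso logo"><img src="deco.png">'
print(check_alt_texts(sample))  # one violation: the second <img>
```

Checks like this cover the automatable part of accessibility; criteria that require judgment – for instance, whether an alt text is actually meaningful – still need human review.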

Specific Recommendations for Working with AI: Better Results Through Better Conditions

The results of our surveys are clear: The technology is powerful, but how you use it makes all the difference. Here are the most important tips from our teams:

  1. Good preparation: The quality of the output depends on the quality of the input. A clear scope, consistent information, and onboarding aids improve results. 
  2. Define clear quality criteria: What does “done” mean? What is “presentation-ready”? 
  3. Choose tools wisely: For complex tasks, structured frameworks provide a solid foundation; for quick prototypes, a lean, prompt-based approach is often sufficient. The key is to use both appropriately for the situation. 
  4. Make iterations and clarification phases standard so the AI has to guess less.  
  5. Don’t rely on AI for rollbacks: Use Git as a reliable backup. 
  6. Schedule time for quality assurance: The effort shifts away from coding toward review, debugging, and oversight.
  7. Strengthen reviews as a core competency: AI does not replace expertise. AI enhances existing knowledge, but without understanding, risk and review effort increase.  


Our Conclusion

The goal of our internal AI hackathons is to give all team members – whether they have a technical background or not – space to experiment, exchange ideas, and learn. And that is exactly what we achieve. Because the greatest benefit of the hackathons is not a single demo, but a more realistic picture of how AI-supported development works within a team: as a combination of specifications, guidelines, iterative dialogue, a culture of review, and engineering fundamentals.

AI can take a lot of work off our hands – but it doesn’t take away responsibility. That’s why the next step isn’t to make everything agent-based. AI agents deliver their added value where scope and controllability are right – while AI-powered support is already the reliable daily driver in teams’ day-to-day work.