In partnership with

Dear Sentinels,

Well, now, it’s exciting to bring you this special edition of the newsletter! In this week's episode, we will be returning to the Flipper Zero, which we introduced last week. You don’t have to read last week's edition unless you really want to know what the Flipper Zero is. This week, we will install the Flipper Zero's desktop software for the first time and go over the ethical hacking capabilities it offers.

As you may have noticed, we are “Clawing” deep into the autonomous agency of Clawdbot, now Moltbot, with a full investigative article, and after that we turn our attention to an academic study of AI-generated build code quality. But first, we have to pay the rent and give you news from around the Internet.

Free email without sacrificing your privacy

Gmail is free, but you pay with your data. Proton Mail is different.

We don’t scan your messages. We don’t sell your behavior. We don’t follow you across the internet.

Proton Mail gives you full-featured, private email without surveillance or creepy profiling. It’s email that respects your time, your attention, and your boundaries.

Email doesn’t have to cost your privacy.

News from around the web

Starting off with the Flipper Zero!

First, head on over to https://flipper.net/pages/downloads and select the operating system you are on.

Then install and launch the Flipper Zero app; it will look like this until you plug in the Flipper.

Then insert a high-quality microSD card and connect a USB-C cable capable of data transfer (if you're unsure, use the cable that came with the Flipper). Plug in the Flipper Zero and update. In the next episode we'll hack something ethically, which means we have the right to hack it and to post it here.
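Not every USB-C cable carries data, so a quick way to confirm the Flipper is actually enumerating is to look for it in `lsusb` output. This is a minimal sketch; the "flipper" vendor string is an assumption about how the device identifies itself, so adjust the pattern if yours reports something different.

```shell
#!/bin/sh
# Check whether a Flipper Zero shows up on the USB bus.
# The "flipper" match string is an assumption; tweak if needed.
find_flipper() {
  grep -qi 'flipper'
}

if lsusb 2>/dev/null | find_flipper; then
  echo "Flipper Zero detected"
else
  echo "Flipper Zero not detected - try another cable or port" >&2
fi
```

If the device never appears, the cable is the usual suspect: charge-only USB-C cables look identical to data cables.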

A Comprehensive Guide to Moltbot (formerly Clawdbot)

Big changes are happening in the world of AI. We’re moving from the old days of chatbots that just wait for you to say something, to new systems that actually take the initiative. Right at the front of this shift is Moltbot. Think of it as an on-device digital assistant that doesn’t just sit around waiting for instructions. Instead, it connects directly to apps like Signal, Telegram, and WhatsApp and can actually run shell commands or browse the web without needing to be in the cloud. Thanks to its open-source nature and a whole bunch of skills and plug-ins, Moltbot isn’t just a fancy search tool. It’s more like a digital sidekick that keeps working even when you’re offline.

Here’s where things get interesting. Moltbot doesn’t just wait for you to ask for help. It actually starts your day with a briefing, pulling together your tasks from apps like Things 3 and giving you a game plan. While you’re making coffee, Moltbot is already checking the news with the Brave search API and lining up what you need to know. As the day goes on, Moltbot keeps an eye on your work and even builds tools you might need—without you having to ask. To keep things running smoothly (and not burn through expensive tokens), it uses Codec CLI for the heavy lifting. The result? Moltbot can whip up a project management system or a custom document viewer and send it off for review, all while you focus on other things.

While Moltbot is busy building, it’s also keeping a running log of everything in a sort of "Second Brain", basically a Next.js site full of markdown notes and guides. So, all your work and ideas get saved automatically, almost like having a digital twin of your brain. In the afternoons, or any time you select, Moltbot switches gears and pulls together a research report, scanning X and Reddit for the latest trends using Grok and OpenAI. Whether it’s machine learning or workflow hacks, you’ll always be up to speed. This constant loop of research, code, document, repeat, makes you way more efficient.

Moltbot actually started out as Clawdbot, but had to change its name after Anthropic complained it sounded too much like their "Claude" models. Not exactly a fun rebrand: there are still 177 places in the code where the old name pops up, which can really trip up anyone trying to use it or connect it to other systems. It’s a good reminder that in the fast-moving world of AI, things can change overnight, and not always for the better.

Now, here’s the catch: Moltbot is super powerful, but that also makes it risky if you’re not careful. Because it has so much access, a simple mistake can let someone else take control. For example, there’s no real separation between what the AI reads and what it’s supposed to do, so a sneaky email could trick Moltbot into running commands you never wanted, like grabbing your files or blasting music on Spotify. Not ideal.
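One common mitigation for that "reads become actions" problem is to put an explicit allowlist between the agent and the shell, so that even a successfully injected instruction can only invoke pre-approved commands. Here is a minimal sketch of the idea; the approved command set is a hypothetical example, not anything shipped with Moltbot.

```shell
#!/bin/sh
# Minimal allowlist wrapper: execute only pre-approved commands,
# reject everything else. The approved set below is illustrative;
# a real deployment would tailor it to the agent's actual duties.
run_allowed() {
  case "$1" in
    date|uptime|whoami)
      "$@"
      ;;
    *)
      echo "blocked: $1" >&2
      return 1
      ;;
  esac
}

# usage: run_allowed uptime        (runs)
#        run_allowed rm -rf /tmp/x (blocked)
```

It's crude, but the design point stands: the decision about what may run should live outside the model, in code the model cannot rewrite.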

It gets worse. Moltbot stores sensitive stuff like API keys and OAuth credentials in plain text, right on your disk. Even Signal pairing codes are just sitting there, which means someone could hijack your Signal account if they get in. And it’s not just about what’s on your computer: there are fake websites and repos out there trying to trick you, and if you’re running Moltbot on a VPS, it can even show up to anyone scanning the network. So, you really have to be on your toes.
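Until the project encrypts those files, the least you can do is make them unreadable to every other account on the machine. The sketch below locks a directory down to owner-only permissions; the `~/.moltbot` path is an assumption for illustration, so point it at wherever your install actually keeps its credential files.

```shell
#!/bin/sh
# Restrict a directory of plain-text secrets to the owning user.
# lock_down ~/.moltbot   <- the path is an assumption, adjust it.
lock_down() {
  dir="$1"
  [ -d "$dir" ] || return 1
  chmod 700 "$dir"                          # owner-only directory
  find "$dir" -type f -exec chmod 600 {} +  # owner-only files
}
```

This doesn't protect you from anything running as your own user, of course, which is exactly why the sandboxing advice below matters.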

So, what’s the best way to stay safe? Treat Moltbot like a wild animal: keep it in a sandbox, on its own hardware, or in a locked-down virtual machine if you want to experiment with it. Don’t let it anywhere near your main files. If, however, you do want to run it, don’t put it on the open Internet: use a separate VLAN and only connect through secure tunnels like Netbird or SSH. Moltbot is amazing for pushing the boundaries of what AI can do, but you have to be strict about security if you want to avoid nasty surprises.
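For the SSH route, a local port forward is the standard pattern: the service binds only to localhost on the isolated box, and you tunnel in rather than exposing it. This is a configuration sketch; the hostname, username, and port 3000 are placeholder assumptions, so substitute your own.

```shell
# Reach a Moltbot web UI on an isolated VLAN via an SSH tunnel
# instead of exposing it to the Internet. Host, user, and port
# 3000 are placeholders - substitute your own values.
ssh -N -L 3000:localhost:3000 user@moltbot-host
# then browse to http://localhost:3000 on your local machine
```

The `-N` flag keeps the connection open for forwarding only, without starting a remote shell.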

Background

The rapid integration of Large Language Models and AI coding agents into software development has prompted significant research into the quality and correctness of AI-generated source code. However, the impact of these agents on build systems, the critical components responsible for dependency management, compilation, and packaging, remains largely unexplored in contemporary empirical software engineering literature. Modern software development relies heavily on automation tools like Maven, Gradle, and CMake, which use complex configuration files that are prone to design flaws known as code smells.

This research fills the existing gap by conducting the first large-scale empirical study to assess whether AI agents introduce or mitigate technical debt in build scripts. The study utilises the "Sniffer" static analysis tool, which is specifically designed to detect maintainability and security-related issues in build code, such as hardcoded credentials or outdated dependencies. By analysing the AIDev dataset, which contains nearly one million agentic pull requests, the authors aim to provide a systematic evaluation of AI-authored build system code.
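To make the "build smell" idea concrete: this is a toy check in the same spirit as the paper's Sniffer tool (not the actual tool, which is far more sophisticated), flagging lines in a build file that look like hardcoded credentials. The patterns and file names are illustrative assumptions.

```shell
#!/bin/sh
# Toy build-smell check, in the spirit of (not equivalent to) the
# paper's Sniffer tool: flag likely hardcoded credentials in a
# build file. Patterns are illustrative, not exhaustive.
scan_build_file() {
  grep -nEi '(password|passwd|secret|api[_-]?key)[[:space:]]*[:=]' "$1"
}

# usage: scan_build_file build.gradle
```

Real detectors also cover the paper's other smell categories, like wildcard dependency versions and missing error handling, which plain pattern matching can't reliably catch.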

Use-case

The primary use case for this research is to inform the development of "AI-aware" quality assessment tools and governance frameworks for automated software engineering. By identifying specific behavioural patterns in agents—such as Copilot having a higher smell-introduction rate compared to Codex—organisations can implement stronger guardrails and integrated quality checks. This allows DevOps teams to integrate AI agents into their build workflows more safely by understanding which specific maintainability risks, like "Wildcard Usage" or "Lack of Error Handling," require the most human oversight.

Introduced Smells

Additionally, the authors' open-source dataset and replication package serve as a foundational resource for the research community. Developers can use the identified refactoring patterns, such as "Externalise Properties," to guide AI agents toward producing higher-quality, more portable build configurations. This ultimately supports the "SE 3.0" paradigm where AI teammates and human developers collaborate to maintain complex, evolving software infrastructures with reduced technical debt.

"This dual impact underscores the need for future research on AI-aware build code quality assessment to systematically evaluate, guide, and govern AI-generated build systems code."

Conclusion

The authors conclude that AI agents are already a significant part of modern software engineering, showing a capacity to both improve and degrade build script quality. Future research will focus on developing "smell-aware" and "refactoring-guided" AI agent behaviours to ensure that automated modifications do not accumulate technical debt. Furthermore, the team plans to expand their analysis to a broader range of build systems and to integrate automated quality checks directly into the agentic workflow.

The article can be found here.

Keep Reading