this should not be possible

It might surprise some folks, but I'm incredibly cynical when it comes to AI and what is possible; yet I keep an open mind. That said, two weeks ago, while I was in SFO, I discovered another thing that should not be possible. Every time I find something that works but shouldn't, it pushes me further toward thinking we are already in post-AGI territory.

I was sitting next to a mate at a pub; it was pretty late, and we were talking about LLM capabilities, riffing on what the modern version of Falco, or any of the other tools in the DFIR space, would look like when combined with an LLM.

You see, a couple of months ago I'd been playing with eBPF and LLMs and discovered that LLMs do eBPF unusually well. So, in the spirit of deliberate practice (see below), out came a laptop and we SSH'd into a Linux machine.

deliberate intentional practice
Something I’ve been wondering about for a really long time is, essentially, why do people say AI doesn’t work for them? What do they mean when they say that? Which identity are they coming from? Are they coming from the perspective of an engineer with a job title and…

The idea was simple.

Could we convert an eBPF trace into a fully functional application via Ralph Wiggum? We started with a toy.
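
For a sense of what "a toy" means here, this is the scale of eBPF program we're talking about. A minimal sketch using the bcc Python bindings (an assumption on my part for illustration, not the exact script from that night): a kprobe on the execve syscall that prints a line every time a process is launched.

```python
#!/usr/bin/env python3
# Minimal sketch of a toy eBPF trace via the bcc Python bindings.
# Illustrative only; requires root and the bcc package (github.com/iovisor/bcc).
from bcc import BPF

# Tiny BPF program: fire on every execve and write a message to the trace pipe.
prog = r"""
int trace_exec(void *ctx) {
    bpf_trace_printk("execve called\n");
    return 0;
}
"""

b = BPF(text=prog)
# Attach to the kernel's architecture-specific execve syscall entry point.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")

print("Tracing execve... Ctrl-C to stop.")
# Stream the kernel trace pipe to stdout: one line per process launch.
b.trace_print()
```

Save it as something like trace_execve.py, run it with sudo, and launch a few commands in another terminal; each one shows up as a line of trace output. A stream of kernel events like that is the kind of trace the question above is about.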