Listening to Logs

Client: Internal - XBG Solutions

2024-12-04

What if we monitored system health through sound instead of screens? We built a proof-of-concept that turns log data into brown noise and musical patterns—and discovered why astronomy's accessibility research might revolutionise how we monitor complex systems.

We were listening to a podcast about Wanda Díaz-Merced, a Puerto Rican astronomer who lost her sight and revolutionised how we analyse astronomical data. When traditional visual methods became impossible for her, she developed sonification techniques that convert telescope data into sound. Using those techniques, she found evidence that star formation can affect supernova explosions, suggesting they don’t depend solely on the mass of the host star. The signal was nearly impossible to spot in visual plots; it only surfaced as a drop in volume once the data had been converted to sound.

That got us thinking: if sound can reveal patterns in cosmic data that visual analysis misses, what about system logs?

The Pattern Recognition Problem

You know the scenario. Multiple monitors displaying dashboards. Log streams scrolling faster than anyone can read. Alert fatigue from systems crying wolf. Your ops team switching between Grafana, Datadog, and whatever other monitoring tools you’re paying for, trying to spot patterns that indicate real problems versus normal operational noise.

The fundamental issue is that visual monitoring demands active attention. You have to look at the screen to know what’s happening. Your brain can only focus on one dashboard at a time, and by the time you notice something’s wrong, it’s often too late to prevent the cascade failure.

But what if your system’s health was something you could feel in the background? What if unusual behaviour stood out not because you were actively watching for it, but because it broke the expected audio pattern you’d unconsciously learned?

Why Sound Makes Sense

Experiments with these techniques showed that sonifying the data improved astronomers’ ability to detect the subtle signals that indicate the presence of a black hole: sound gives access to very weak signals that are, by nature, invisible to the human eye in astronomical data. The human auditory system is remarkably good at pattern recognition, especially for temporal sequences and anomaly detection.

Think about how you recognise your car’s engine health by sound. You don’t need to watch the dashboard—you hear when something’s wrong. A different pitch, an irregular rhythm, an unexpected noise, and you know there’s a problem before any gauge moves.

There’s a parallel in focus research: one study found that participants performed significantly better on psychomotor speed, continuous performance, executive function, and working memory tests when exposed to coloured noise. Brown noise masks ambient distractions with a consistent, low-frequency sound that helps your brain focus on what’s in front of you whilst remaining alert to changes.

That’s the insight: brown noise as a baseline with musical patterns for discrete events.

Building the Audio Logs

We spent a weekend building a proof-of-concept using Tone.js to see if log sonification could actually work. Not just as an accessibility tool, but as a more intuitive way to monitor system health.

Here’s the basic mapping we developed:

// Brown noise baseline represents overall system health
const baseNoise = new Tone.Noise("brown").start();
const baseFilter = new Tone.Filter(600, "lowpass").toDestination();
baseNoise.connect(baseFilter);
baseNoise.volume.value = -18; // Subtle background presence

// Event channel with reverb for discrete events
const eventReverb = new Tone.Reverb({
  decay: 0.8,
  wet: 0.3
}).toDestination();

const eventFilter = new Tone.Filter(1000, "bandpass");
eventFilter.connect(eventReverb);

// Map system health to filter characteristics
function updateSystemBaseline(errorRate, avgLatency) {
  // Higher error rate (as a percentage) = lower, muddier filter
  const targetFreq = 600 - (errorRate * 8);

  // Higher latency = more resonance (system struggling)
  const targetQ = 1 + (avgLatency / 200);

  baseFilter.frequency.rampTo(Math.max(200, targetFreq), 0.5);
  baseFilter.Q.rampTo(Math.min(10, targetQ), 0.5);
}
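
One practical wrinkle worth noting: browsers won’t play audio until the user has interacted with the page, so the audio context has to be unlocked explicitly before any of this is audible. A minimal hook (the button selector here is hypothetical) looks something like:

// Browsers require a user gesture before audio can play;
// Tone.start() resumes the underlying Web Audio context
document.querySelector("#start-monitoring").addEventListener("click", async () => {
  await Tone.start();
  console.log("audio monitoring running");
});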

Each event type gets a distinct musical signature:

  • Normal requests: Gentle upward filter sweep with a soft click
  • Warnings: Sharp resonance spike with grainy texture
  • Errors: Harsh clicks with volume swell and resonant degradation
  • Job completions: Ascending arpeggio (C-E-G-C) with sparkle
  • Auth failures: Multiple rapid harsh clicks descending in pitch
  • System cascades: Progressive resonant degradation that genuinely sounds like something failing

The brown noise baseline modulates based on aggregate metrics. High error rates make the filter muddier and lower. High latency adds resonance that makes the system sound like it’s struggling. High load increases the volume slightly.
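
To give a flavour of how those signatures are triggered, here’s a stripped-down sketch of the event mapping. The instrument and note choices below are illustrative rather than the exact patches from our prototype, and event.eventType anticipates the classification step shown later; it plays through the event channel defined above.

// Illustrative event mapper, routed through the event channel above
const eventSynth = new Tone.PolySynth(Tone.Synth).connect(eventFilter);

function sonifyEvent(event) {
  const now = Tone.now();
  switch (event.eventType) {
    case 'job_complete':
      // Ascending C-E-G-C arpeggio
      ['C5', 'E5', 'G5', 'C6'].forEach((note, i) =>
        eventSynth.triggerAttackRelease(note, '16n', now + i * 0.09));
      break;
    case 'auth_failure':
      // Rapid clicks descending in pitch
      ['G4', 'E4', 'C4'].forEach((note, i) =>
        eventSynth.triggerAttackRelease(note, '32n', now + i * 0.05));
      break;
    case 'error':
      // Harsh low hit standing in for the click-and-swell
      eventSynth.triggerAttackRelease('C2', '8n', now);
      break;
    case 'warning':
      // Short, sharper mid-range tone
      eventSynth.triggerAttackRelease('A4', '16n', now);
      break;
    default:
      // Normal requests: soft, barely-there tick
      eventSynth.triggerAttackRelease('C6', '64n', now);
  }
}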

What We Discovered

After running our proof-of-concept with simulated log data, a few things became clear:

Pattern recognition is immediate. Within minutes, you start recognising the audio signature of different system states. A healthy system has a particular sound profile. Degradation has its own audio fingerprint that’s unmistakable once you’ve heard it.

Peripheral awareness works. You can focus on coding or debugging while the audio runs in the background. Your brain filters out the familiar baseline but immediately notices when the pattern changes. It’s like having perfect peripheral vision for your entire infrastructure.

Alert fatigue disappears. Instead of hundreds of visual alerts competing for attention, you get a continuous soundscape that tells a story. A single auth failure sounds different from a coordinated attack. A gradual performance degradation sounds different from a cascade failure.

The timing matters. Visual dashboards show you what happened. Audio lets you hear it happening. You pick up on the early stages of problems—the slight change in pitch that indicates latency is climbing, the subtle texture shift that suggests error rates are increasing.

The Cross-Domain Insight

This is exactly the kind of pattern we see repeatedly: accessibility research creates better tools for everyone. Díaz-Merced’s sonification techniques weren’t just about making astronomy accessible to blind scientists: even for sighted astronomers working under professional conditions, sound enhances the ability to pick out very weak signals that are, by nature, invisible to the human eye.

Curb cuts were designed for wheelchairs but help everyone with luggage, prams, or mobility challenges. Voice interfaces started as accessibility features and became Siri and Alexa. Closed captions help people in noisy environments, non-native speakers, and anyone watching video without sound.

Log sonification follows the same pattern. What starts as “how do we monitor systems without looking at screens” becomes “why are we limiting ourselves to visual monitoring when audio reveals patterns we’re missing?”

The Workplace Reality

Here’s where it gets interesting: researchers have suggested that coloured noise could be used to improve the ambient sound environment for office workers. Employees who use brown noise in their work environments often report better concentration and fewer distractions.

Most offices are either silent (with everyone wearing headphones) or filled with distracting conversations and keyboard noise. Brown noise baselines with system monitoring could actually improve focus whilst providing continuous infrastructure awareness.

You’re working on feature development, but you hear the subtle shift that indicates the payment service is struggling. You notice the pattern that suggests someone’s trying to brute-force authentication. You catch the early signs of a database connection pool exhaustion—all without taking your attention away from the code you’re writing.

Technical Implementation

A production system would need to solve several challenges we glossed over in our proof-of-concept:

Log parsing and classification. You need real-time analysis to map log events to audio parameters. Error patterns, user identification, service health metrics, latency distributions—all feeding the audio engine.

Event prioritisation. Not every log line deserves audio feedback. You need intelligent filtering to focus on events that indicate system health changes rather than normal operational noise.

User customisation. Different roles care about different patterns. A database administrator needs different audio cues than a frontend developer. The sonification should adapt to what matters for your specific responsibilities.
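
One lightweight way to handle that is a per-role profile that decides which event types reach the audio engine at all, and how prominently. The roles and gain values below are purely illustrative:

// Hypothetical per-role profiles: gain in dB per event type, null = don't sonify
const roleProfiles = {
  dba:      { slow_query: 0, error: -3, auth_failure: null, job_complete: -9 },
  frontend: { slow_query: null, error: 0, auth_failure: -6, job_complete: null }
};

// Returns the gain to play this event at for a given role, or null to skip it
function eventGainForRole(event, role) {
  const profile = roleProfiles[role];
  if (!profile || !(event.eventType in profile)) return -6; // sensible default
  return profile[event.eventType];
}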

Volume and timing management. High-traffic systems generate thousands of events per second. The audio mapping needs to aggregate and represent patterns rather than trying to sonify every individual event.

Here’s a rough implementation approach:

// Classify a raw log entry into an event type for sonification
function classifyLogEvent(logEntry) {
  const patterns = {
    'error': /ERROR|FATAL|Exception/i,
    'warning': /WARN|WARNING/i,
    'auth_failure': /authentication.*failed|invalid.*credentials/i,
    'slow_query': /slow.*query|timeout/i,
    'job_complete': /job.*completed|task.*finished/i
  };

  for (const [type, pattern] of Object.entries(patterns)) {
    if (pattern.test(logEntry.message)) {
      return { ...logEntry, eventType: type };
    }
  }

  return { ...logEntry, eventType: 'info' };
}

// Aggregate recent events into the metrics that drive the baseline
function updateMetrics(events) {
  const recent = events.filter(e =>
    e.timestamp > Date.now() - 60000 // Last minute
  );
  if (recent.length === 0) return;

  // Error rate as a percentage, matching updateSystemBaseline's scaling
  const errorRate = (recent.filter(e => e.level === 'error').length / recent.length) * 100;

  const withLatency = recent.filter(e => e.responseTime);
  const avgLatency = withLatency.length
    ? withLatency.reduce((sum, e) => sum + e.responseTime, 0) / withLatency.length
    : 0;

  updateSystemBaseline(errorRate, avgLatency);
}

// Real-time log stream processing with a rolling buffer of recent events
const eventBuffer = [];

function processLogStream(logStream) {
  logStream
    .map(classifyLogEvent)
    .forEach(event => {
      eventBuffer.push(event);
      sonifyEvent(event);
      updateMetrics(eventBuffer);
    });

  // Drop events older than the metrics window so the buffer doesn't grow unbounded
  const cutoff = Date.now() - 60000;
  while (eventBuffer.length && eventBuffer[0].timestamp < cutoff) eventBuffer.shift();
}
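
The stream handler above still tries to sonify every classified event, which won’t survive real traffic. One simple way into the prioritisation and volume problems is a per-event-type rate limiter, so high-volume, low-value events fold into the aggregate baseline rather than playing individually. The caps below are placeholders, not tuned values:

// Cap how many events of each type are sonified per second; everything else
// still feeds the aggregate metrics, just silently
const maxPerSecond = { info: 2, warning: 5, slow_query: 5, error: 20, auth_failure: 20 };
const recentPlays = new Map(); // eventType -> timestamps within the last second

function shouldPlay(event) {
  const limit = maxPerSecond[event.eventType] ?? 10;
  const now = Date.now();
  const recent = (recentPlays.get(event.eventType) || []).filter(t => now - t < 1000);
  const play = recent.length < limit;
  if (play) recent.push(now);
  recentPlays.set(event.eventType, recent);
  return play;
}

// In processLogStream, only call sonifyEvent(event) when shouldPlay(event)
// returns true; updateMetrics still sees every event.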

What We’re Still Figuring Out

Our proof-of-concept raises more questions than it answers:

Scalability. Does this approach work for systems generating millions of events per hour? How do you maintain meaningful audio patterns when dealing with massive distributed architectures?

Learning curve. How long does it take teams to become proficient at audio pattern recognition? Is there a universal audio language for system health, or does each team need to develop their own?

Integration complexity. Most organisations have log aggregation through ELK, Splunk, or similar tools. How does sonification integrate with existing monitoring infrastructure without requiring a complete overhaul?

Cultural acceptance. Will teams actually adopt background audio monitoring, or will it be seen as distracting noise? The research suggests brown noise improves focus, but that’s different from having your infrastructure talking to you all day.

The Broader Question

This experiment isn’t really about log sonification. It’s about how we approach monitoring and pattern recognition in complex systems.

We default to visual interfaces because that’s what we’ve always done. But visual monitoring has fundamental limitations: it requires active attention, it can only show one perspective at a time, and it’s terrible at helping us notice gradual changes or subtle patterns.

Audio monitoring isn’t a replacement for visual dashboards—it’s complementary. You still need graphs and alerts for diagnosis and troubleshooting. But for ongoing situational awareness and early problem detection, sound offers advantages that screens can’t match.

The real insight is about peripheral awareness. Your visual attention is a limited resource that you need for actual work. But your auditory system can process background patterns while you focus elsewhere, alerting you to changes that deserve investigation.

If You’re Interested

We’ve got a working proof-of-concept that demonstrates the basic concepts. If you’re curious about implementing something similar in your environment, or you want to explore how sonification might work with your specific monitoring challenges, get in touch.

This feels like one of those ideas that’s either genuinely useful or completely impractical—and the only way to find out is to try it with real systems and real teams. If you’ve got interesting log data and want to experiment with audio monitoring, we’d love to collaborate on some implementation research.

The best solutions often come from unexpected places. Sometimes you need an astronomer studying black holes to show you a better way to monitor your databases.

Want to discuss a similar challenge?

We're always up for a chat about systems, automation, and pragmatic solutions.

Get in Touch