Implicit Temporal Coupling in Asynchronous Systems

Introduction: A Race Against… Yourself?

You’ve just rolled out a fancy asynchronous service that purrs like a well-tuned engine. Except it occasionally goes “pop.” Not a life-threatening pop, no giant performance catastrophes or memory leaks, but the kind of glitch that leaves everyone scratching their heads, thinking: “I’m sure that wasn’t happening yesterday.” If you’ve been there, congratulations: you’ve stumbled upon the wonderfully frustrating world of implicit temporal coupling, where the correctness of your system secretly depends on the timing or order of events that are (allegedly) unrelated.

In synchronous code, everything marches along in single-file order. When you read a variable, you’re pretty sure it’s up to date (or at least consistent with the line above). But in an asynchronous system, multiple processes or coroutines run concurrently. They may rely on shared state. They may listen for events from different sources. Suddenly, two (or more) bits of code are scheduled in an unexpected order, or a process you assumed would finish first actually ends last.

This can lead to scenarios like:

  • Phantom Dependencies: A background job that processes data only after a certain log statement (somewhere else) finishes writing to a file.
  • Fickle Event Sequence: A consumer that expects a “setup event” and then a “teardown event,” but occasionally gets them in reverse order because the teardown fired earlier than usual.
  • Random Test Failures: “But it works on my machine” becomes the motto because in your local environment, you can’t replicate the race condition that occurs once a month in production.

All of these are signs of implicit temporal coupling: a dependence on the time or sequence of events that remains hidden until everything breaks, at 3 a.m., on a Sunday.
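The first scenario is easy to reproduce in miniature. The sketch below is hypothetical (the `config` variable and the delay values are invented for illustration): two coroutines share a variable, and whether the reader sees a value depends entirely on which delay happens to finish first.

```kotlin
import kotlinx.coroutines.*

// Hypothetical shared state: one coroutine writes it, another reads it.
// Nothing in the code says "write before read" — only the delays do.
var config: String? = null

fun main() = runBlocking {
    launch {
        delay(100) // the "setup" finishes later than the reader expects
        config = "ready"
    }
    launch {
        delay(50)  // the reader wakes up first...
        println("Reader sees config = $config") // ...and sees null
    }
}
```

Shrink the first delay below the second and the bug vanishes, which is exactly why such couplings survive code review: the timing assumption lives in the schedule, not in the source.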


The Solution: Make Dependencies Explicit

We need to shift from being magically reliant on order and timing to explicitly orchestrated flows. The idea: encode the sequence or gating logic into the system in a clear, testable way.

In Kotlin, we have some excellent concurrency tools like coroutines, channels, and flows that help make asynchrony more structured. Here’s a simplified example using coroutines and channels to illustrate an explicit “orchestration pipeline” approach:

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

data class Event(val name: String, val payload: String)

// Our pipeline stages
suspend fun stageOne(input: Channel<Event>, output: Channel<Event>) {
    for (event in input) {
        // Some domain logic
        println("StageOne processing: ${event.name}")
        // Pass along
        output.send(event.copy(name = "StageOne->${event.name}"))
    }
    output.close() // propagate completion so the downstream loop terminates
}

suspend fun stageTwo(input: Channel<Event>, output: Channel<Event>) {
    for (event in input) {
        // Another domain logic step
        println("StageTwo processing: ${event.name}")
        output.send(event.copy(name = "StageTwo->${event.name}"))
    }
    output.close()
}

// In real life, you'd likely return Flow or something more sophisticated
fun main() = runBlocking {
    val channel1 = Channel<Event>()
    val channel2 = Channel<Event>()
    val channel3 = Channel<Event>()

    // Launch pipeline
    launch { stageOne(channel1, channel2) }
    launch { stageTwo(channel2, channel3) }

    // Send some events (this is your "producer")
    launch {
        repeat(5) { i ->
            channel1.send(Event("Event$i", "Payload$i"))
        }
        channel1.close()
    }

    // Collect final results
    launch {
        for (event in channel3) {
            println("Final Output: $event")
        }
    }
}

Instead of a random potpourri of asynchronous calls, we’ve defined a clear pipeline. The data flows from channel1 to channel2 to channel3, so StageTwo can only ever see events that StageOne has already processed (no, the compiler won’t enforce that for you, but the channels guarantee the delivery order). You’ve declared an explicit order. No more guesswork about who depends on whom.


Why This Is Elegant (a.k.a. “No More 3 a.m. Pager Alerts”)

  1. Clarity of Flow: Each stage does one thing. You don’t rely on the ephemeral “I hope StageOne finished first” logic.
  2. Testability: You can test each stage in isolation. Fake channels can feed in mock events, letting you replicate exact sequences (and out-of-order fiascos) in tests.
  3. Fail Fast & Visibly: If something breaks, you’ll see it in a well-defined stage, instead of rummaging around logs asking “Who wrote first? Did event #7 finish before event #2 kicked off?”
  4. Maintainability: That poor new hire who just joined the team (or your future self) can read the code and see the flow. Less tribal knowledge, more explicit design.
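To make point 2 concrete, here is a hedged sketch of such an isolation test. It re-declares Event and a minimal stageOne so the snippet is self-contained; the `output.close()` call is an addition so the test can finish cleanly:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

data class Event(val name: String, val payload: String)

// Same shape as the pipeline stage above: read, transform, pass along.
suspend fun stageOne(input: Channel<Event>, output: Channel<Event>) {
    for (event in input) {
        output.send(event.copy(name = "StageOne->${event.name}"))
    }
    output.close()
}

fun main() = runBlocking {
    val input = Channel<Event>()
    val output = Channel<Event>()

    launch { stageOne(input, output) }

    // Feed one fake event and close — an exact, reproducible sequence,
    // no production scheduler required.
    input.send(Event("Test", "payload"))
    input.close()

    val result = output.receive()
    check(result.name == "StageOne->Test") { "unexpected: ${result.name}" }
    println("stageOne test passed: ${result.name}")
}
```

Because the stage only talks to channels, the test controls the exact ordering of inputs, including deliberately out-of-order ones.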

The Timestamp’s Not Always Right

Temporal coupling sneaks up on you when your system’s correctness is pinned on timing or sequence details that are never clearly stated. The cure? Turn implicit assumptions into explicit data flows or orchestrations. Yes, it might mean adding a bit more boilerplate code (like channels or flows), but trust me – once you’ve lived through the 3 a.m. meltdown, you’ll be more than happy to add those extra lines.

A Real-World Tale: The Night of the Dangling Event Handler

In one of my previous private projects (names changed to protect the guilty—er, the innocent), I faced a nasty concurrency gremlin. The system had to coordinate between multiple data processing modules. Each module generated events, and downstream processes had to react properly. But occasionally, we’d get a partial update because one module “raced ahead” of another. We had tests passing in isolation, but in production? We got random inconsistencies that made debugging feel like chasing ghosts.

Scene: The “DataPreparationService” Goes Rogue

Let’s call it DataPreparationService—responsible for enriching data with metadata. Another service, DataPublisher, took that enriched data and sent it to an external API. The glitch? DataPreparationService sometimes finished after DataPublisher had already tried to ship the data, leading to incomplete payloads. Logs looked like:

[2025-04-06 03:03:04.123] DataPublisher - Sending data: FooDocument(id=42, metadata=null)
[2025-04-06 03:03:05.456] DataPreparationService - Enriched data: FooDocument(id=42, metadata=Bar)

That’s a one-second difference. But it might as well have been an eternity if your external system expects metadata=Bar.

Why We Couldn’t Just “Wait a Little Longer”

Sure, we could have put a hard-coded delay in DataPublisher. But let’s be honest: that’s the concurrency version of duct tape. Over time, the logic becomes “sleep therapy.” With each new bug, you just add more sleeps. Eventually, your pipeline is slower than a snail on holiday, and you still have race conditions.

The Solution: Make the Sequence Explicit

We decided to unify these “stages” into an explicit data flow. The approach:

  1. DataPreparationService emits an event whenever data is enriched.
  2. DataPublisher (and any other interested module) subscribes to these events in a well-defined pipeline—so it only publishes once data has been properly enriched.

This way, instead of hoping “somewhere in the call stack we already enriched the data,” we orchestrate the workflow so that you simply can’t publish until the enrichment event has occurred.

Below is a simplified version of how we refactored the architecture using Kotlin coroutines and flows.


The Class Setup

import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.*

// Our domain model
data class FooDocument(
    val id: Int,
    val content: String,
    val metadata: String? = null
)

// Represents the enriched event
sealed class EnrichmentEvent {
    data class Enriched(val document: FooDocument) : EnrichmentEvent()
    object Completed : EnrichmentEvent() // For closing flows gracefully
}

// Responsible for preparing data
class DataPreparationService {

    // SharedFlow to broadcast enrichment updates
    private val enrichmentFlow = MutableSharedFlow<EnrichmentEvent>(replay = 0)

    // Expose a read-only version to subscribers
    val enrichmentUpdates: SharedFlow<EnrichmentEvent> get() = enrichmentFlow

    suspend fun enrichDocument(original: FooDocument) {
        // Some "expensive" operation
        delay(100) // simulate time for enrichment
        val enriched = original.copy(metadata = "EnrichedDataFor${original.id}")
        println("DataPreparationService: Document ${original.id} enriched.")

        // Emit event
        enrichmentFlow.emit(EnrichmentEvent.Enriched(enriched))
    }

    // For cleaning up or signaling completion
    suspend fun signalCompletion() {
        enrichmentFlow.emit(EnrichmentEvent.Completed)
    }
}

// Consumes enriched docs and publishes them
class DataPublisher {

    suspend fun startPublishing(
        enrichmentFlow: SharedFlow<EnrichmentEvent>
    ) {
        // A SharedFlow never completes on its own, and a bare `return` is not
        // allowed inside the collect lambda, so we end collection via takeWhile.
        enrichmentFlow
            .takeWhile { it !is EnrichmentEvent.Completed }
            .collect { event ->
                if (event is EnrichmentEvent.Enriched) {
                    println("DataPublisher: Sending out doc ${event.document.id} with metadata = ${event.document.metadata}")
                    // Pretend to call some external API
                    callExternalAPI(event.document)
                }
            }
        println("DataPublisher: Enrichment completed. Stopping publisher.")
    }

    private suspend fun callExternalAPI(document: FooDocument) {
        // Some fake external call
        delay(50)
        println("DataPublisher: External API responded for doc ${document.id}")
    }
}

How It Works

  1. DataPreparationService has a MutableSharedFlow<EnrichmentEvent>. It emits an event whenever a new document is enriched.
  2. DataPublisher subscribes to enrichmentFlow. Any new event triggers the publisher’s logic—ensuring that, by the time it tries to publish, data is actually enriched.
  3. If we need multiple services (e.g., logging, caching, etc.), they can also subscribe to enrichmentUpdates. Everyone sees the same event once it’s emitted.
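Point 3 can be sketched with a stripped-down flow of plain strings (the "Publisher" and "Logger" names are invented placeholders, not classes from this article): two independent collectors subscribe to one MutableSharedFlow and both see every emission.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    val updates = MutableSharedFlow<String>()

    // Two independent subscribers, e.g. a publisher and a hypothetical logger.
    val jobs = listOf("Publisher", "Logger").map { name ->
        launch {
            // take(2) lets each subscriber stop after two events
            updates.take(2).collect { println("$name saw: $it") }
        }
    }

    // Let both subscribers register first: with replay = 0,
    // events emitted before subscription are simply dropped.
    yield()
    updates.emit("doc-1 enriched")
    updates.emit("doc-2 enriched")
    jobs.joinAll()
}
```

Adding a third consumer is just one more `launch`; the emitting side doesn’t change at all, which is what makes the fan-out cheap.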

Putting It All Together

fun main() = runBlocking {
    val preparationService = DataPreparationService()
    val publisher = DataPublisher()

    // Launch publisher as a background coroutine.
    // (With replay = 0, events emitted before anyone subscribes are dropped,
    // so the publisher must be collecting before we start enriching.)
    val publisherJob = launch {
        publisher.startPublishing(preparationService.enrichmentUpdates)
    }

    // Let's create some artificial documents
    val docs = listOf(
        FooDocument(id = 1, content = "Hello World"),
        FooDocument(id = 2, content = "Lorem Ipsum"),
        FooDocument(id = 3, content = "Dolor Sit Amet")
    )

    // Pretend we're receiving these docs from some random pipeline
    launch {
        docs.forEach { doc ->
            // Notice we do NOT publish here directly anymore;
            // we trust the service to handle enrichment first.
            preparationService.enrichDocument(doc)
        }
        // Signal done
        preparationService.signalCompletion()
    }

    // Wait until the publisher finishes
    publisherJob.join()
    println("main: All done. No more uncertain race conditions!")
}

What Changed?

  • In the old approach, DataPublisher tried to publish as soon as it “heard” there was a doc to publish, not waiting for enrichment. The timing was implicit: “We hope DataPreparationService is probably done.”
  • Now, we introduced an explicit dependency via SharedFlow<EnrichmentEvent>. If DataPublisher doesn’t see an Enriched event, it does not attempt to publish that document.

Why This Worked So Well

  1. Explicitness: No more “did that finishing call happen first?” We replaced an implicit timing assumption with a real data-driven approach.
  2. Scalability: Multiple consumers can subscribe to the same flow. If tomorrow we add DataArchiver, it just collects from enrichmentFlow as well.
  3. Testability: We can unit-test DataPreparationService by collecting from enrichmentFlow in memory. We verify events are emitted for each doc. We can similarly test DataPublisher by providing a mock flow of events.
  4. No Sleep Therapy: Instead of sprinkling random delay(1000) calls, we rely on real synchronization signals. If docs are enriched, an event is fired. If not, it’s not. Simple.
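As a sketch of point 3, here is roughly what such a unit test looks like, using a simplified stand-in for DataPreparationService (the real class above wraps documents in EnrichmentEvent and simulates work with delays; this version emits documents directly to keep the test short):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

data class FooDocument(val id: Int, val content: String, val metadata: String? = null)

// Simplified stand-in for the service under test.
class DataPreparationService {
    private val enrichmentFlow = MutableSharedFlow<FooDocument>()
    val enrichmentUpdates: SharedFlow<FooDocument> get() = enrichmentFlow

    suspend fun enrichDocument(original: FooDocument) {
        enrichmentFlow.emit(original.copy(metadata = "EnrichedDataFor${original.id}"))
    }
}

fun main() = runBlocking {
    val service = DataPreparationService()

    // Subscribe first, then drive the service: the test sees exactly one event.
    val firstEvent = async { service.enrichmentUpdates.first() }
    yield() // let the collector subscribe before we emit
    service.enrichDocument(FooDocument(id = 7, content = "test"))

    val enriched = firstEvent.await()
    check(enriched.metadata == "EnrichedDataFor7") { "unexpected: ${enriched.metadata}" }
    println("Test passed: ${enriched.metadata}")
}
```

No sleeps, no polling: the test synchronizes on the same event the production code does, so it can’t pass by lucky timing.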

Moral of the Story: If your concurrency design has you guessing about who’s done first, it’s probably time to explicitly model those dependencies. Shared flows, channels, or well-defined pipeline stages give you back control. Once we made that shift, the random, after-midnight production failures said “goodbye” faster than a cat passing an open door.

And that’s one more scalp in the war on implicit temporal coupling. Or, in less dramatic terms: “Make concurrency explicit; you’ll save yourself a lot of time, hair, and coffee in the long run.”

Happy coding, and may your flows always be in the correct order!
