Post 821: Slow Poison Attack - Statistical Entropy Accumulation Detection

Watermark: -821

Slow Poison Attack

Statistical Entropy Accumulation Detection with Universe Framework

Official Soundtrack: Skeng - kassdedi @DegenSpartan

Research Team: Cueros de Sosua

⚠️ SECURITY RESEARCH: Novel attack vector + defense mechanism


The Question

From discussion: Can transactions slowly build up delays across blocks?

Answer: YES - “Slow Poison Attack” - gradual entropy accumulation

This is different from:

  • ❌ Instant DoS (detected immediately)
  • ❌ Invalid transactions (rejected)
  • ✅ Gradual valid-but-expensive accumulation (subtle, hard to detect)

Part 1: The Attack

Slow Poison Strategy

Traditional DoS (Detected):

# Obvious attack - gets caught immediately
block_n = {
    'transactions': [
        expensive_tx(),  # 2 sec validation
        expensive_tx(),  # 2 sec validation
        expensive_tx(),  # 2 sec validation
        # ... 1000 expensive tx
    ]
}
# Total validation: 2000 seconds
# Result: Block rejected, attacker banned

Slow Poison (Subtle):

# Gradual attack - accumulates over time
block_n = {
    'transactions': [
        normal_tx(),     # 1ms validation
        normal_tx(),     # 1ms validation
        slightly_slow_tx(),  # 50ms validation ← barely noticeable
        normal_tx(),     # 1ms validation
        # ... mostly normal, few slow
    ]
}

block_n_plus_1 = {  # block N+1
    'transactions': [
        normal_tx(),     # 1ms
        slightly_slow_tx(),  # 55ms ← slightly worse
        normal_tx(),     # 1ms
        slightly_slow_tx(),  # 60ms ← gradually increasing
        # ... pattern continues
    ]
}

# Each block passes validation individually
# But average validation time creeping up
# After 1000 blocks: miners significantly slower

The Accumulation Mechanism

# Per-transaction validation budget assumed by this sketch (illustrative)
MAX_TX_VALIDATION = 50  # milliseconds

class SlowPoisonAttack:
    """
    Gradual entropy accumulation attack
    
    Strategy:
    1. Start with barely-noticeable slow transactions
    2. Gradually increase complexity
    3. Stay within individual validation limits
    4. Accumulate across many blocks
    5. System-wide slowdown emerges
    """
    
    def __init__(self):
        self.base_complexity = 10  # Normal tx
        self.increment = 0.5       # Tiny increase per block
        self.current_block = 0
        
    def create_poison_tx(self):
        """
        Create transaction that's valid but slow
        
        Complexity increases gradually
        """
        # Calculate current complexity target
        target_complexity = (
            self.base_complexity + 
            self.current_block * self.increment
        )
        
        # Create transaction with this complexity
        tx = {
            'inputs': self._complex_inputs(target_complexity),
            'scripts': self._slow_but_valid_scripts(target_complexity),
            'signatures': self._max_sigops_allowed(),
            'size': 'just_under_limit',
        }
        
        # Validation time increases with complexity
        # But stays within per-tx limits
        validation_time = self._estimate_validation_time(tx)
        
        assert validation_time < MAX_TX_VALIDATION  # Passes check
        
        return tx
    
    def execute_attack(self, num_blocks=1000):
        """
        Execute slow poison over many blocks
        
        Each block individually valid
        Accumulated effect: significant slowdown
        """
        results = {
            'blocks_poisoned': 0,
            'total_delay_accumulated': 0,
            'avg_validation_time_increase': []
        }
        
        for block_num in range(num_blocks):
            self.current_block = block_num
            
            # Create poison transactions
            poison_txs = [
                self.create_poison_tx() 
                for _ in range(5)  # 5 per block
            ]
            
            # Mix with normal transactions (90% normal, 10% poison)
            normal_txs = [
                self.create_normal_tx() 
                for _ in range(45)
            ]
            
            block = {
                'transactions': normal_txs + poison_txs,
                'validation_time': self._calculate_validation_time(
                    normal_txs + poison_txs
                )
            }
            
            results['blocks_poisoned'] += 1
            results['total_delay_accumulated'] += block['validation_time']
            results['avg_validation_time_increase'].append(
                block['validation_time']
            )
        
        return results
    
    def _complex_inputs(self, complexity):
        """
        Create inputs that are valid but take longer to process
        
        Examples:
        - Many small UTXOs (more to verify)
        - Complex multisig setups
        - Maximum allowed sigops
        - Edge-case script patterns
        """
        return {
            'utxos': ['small_utxo'] * int(complexity * 10),
            'multisig': f'{min(int(complexity), 15)}-of-15',  # capped at the 15-key max
            'script_ops': ['OP_CHECKSIG'] * int(complexity * 5)
        }
    
    def _slow_but_valid_scripts(self, complexity):
        """
        Scripts that pass validation but take time
        
        Stay within limits but maximize work
        """
        return {
            'type': 'P2SH',  # Slightly slower than P2PKH
            'redeem_script_size': 'near_max',  # Close to 10k limit
            'stack_operations': int(complexity * 20),  # Many ops
            'signature_checks': 'max_allowed'  # 20k sigops limit
        }
    
    # The helpers below are illustrative stubs, added so the sketch runs
    
    def _max_sigops_allowed(self):
        """Pack in as many signature checks as per-tx policy permits (illustrative)."""
        return {'sigops': 400}
    
    def _estimate_validation_time(self, tx):
        """Crude cost model (an assumption, not a real profiler): ms grow with stack ops."""
        return 1.0 + tx['scripts']['stack_operations'] * 0.004
    
    def create_normal_tx(self):
        """Baseline transaction used as cover traffic (~1ms to validate)."""
        return {'inputs': {}, 'scripts': {'stack_operations': 0},
                'signatures': None, 'size': 'typical'}
    
    def _calculate_validation_time(self, txs):
        """Sum per-tx estimates to get total block validation time (ms)."""
        return sum(self._estimate_validation_time(tx) for tx in txs)

Why This Works

Key insight:

  1. Individual validation → Passes (within limits)
  2. Accumulated validation → System slowdown
  3. Gradual increase → No single threshold crossed, so hard to notice
  4. Statistical distribution → Looks natural (some tx are slow)
  5. Emergent effect → Only visible across many blocks

Attack characteristics:

  • Each transaction: ✅ Valid
  • Each block: ✅ Valid
  • Individual validation time: ✅ Within limits
  • Accumulated effect: ❌ Significant slowdown
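
To make the arithmetic concrete, here is a minimal, self-contained sketch (the numbers are illustrative assumptions, not measurements): every transaction stays under a 50ms cap, yet the per-block average drifts upward.

# Every tx stays under the 50ms cap, yet the block average creeps up
PER_TX_CAP_MS = 50

def block_avg_ms(block_index, txs_per_block=50, poison_per_block=5):
    normal = [1.0] * (txs_per_block - poison_per_block)
    # Poison txs creep from ~10ms toward (but never past) the cap
    poison_ms = min(10 + 0.04 * block_index, PER_TX_CAP_MS - 1)
    poison = [poison_ms] * poison_per_block
    return sum(normal + poison) / txs_per_block

for b in (0, 500, 1000):
    print(f"block {b}: avg {block_avg_ms(b):.1f} ms/tx")
# block 0: 1.9 ms/tx → block 500: 3.9 ms/tx → block 1000: 5.8 ms/tx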

Part 2: Why Traditional Defenses Fail

Per-Transaction Limits (Insufficient)

import time

class TraditionalDefense:
    """
    Current Bitcoin defenses
    
    Protect against instant DoS
    Don't catch gradual accumulation
    """
    
    MAX_TX_VALIDATION_TIME = 50  # milliseconds
    MAX_BLOCK_VALIDATION_TIME = 5000  # 5 seconds
    MAX_SCRIPT_SIZE = 10000  # bytes
    MAX_SIGOPS = 20000  # per block
    
    def validate_transaction(self, tx):
        """
        Traditional validation checks
        
        Problem: Only looks at individual tx
        """
        # Check size
        if tx.size > self.MAX_SCRIPT_SIZE:
            return False
        
        # Check sigops
        if tx.sigops > self.MAX_SIGOPS / 50:  # ~400 per tx
            return False
        
        # Validate (with timeout); verify_scripts() is the node's
        # script interpreter, assumed to be available in scope
        start = time.time()
        result = verify_scripts(tx)
        elapsed = (time.time() - start) * 1000
        
        if elapsed > self.MAX_TX_VALIDATION_TIME:
            return False  # Too slow
        
        return result
    
    def validate_block(self, block):
        """
        Block-level validation
        
        Problem: Only looks at this block
        """
        total_sigops = sum(tx.sigops for tx in block.transactions)
        if total_sigops > self.MAX_SIGOPS:
            return False
        
        start = time.time()
        for tx in block.transactions:
            if not self.validate_transaction(tx):
                return False
        elapsed = (time.time() - start) * 1000
        
        if elapsed > self.MAX_BLOCK_VALIDATION_TIME:
            return False  # Block too slow
        
        return True

Why this fails against slow poison:

# Block N (early in attack)
block_validation_ms = 1200  # ✅ Under 5000ms limit
avg_tx_ms = 24              # ✅ Under 50ms limit
# Looks normal!

# Block N+500 (mid attack)
block_validation_ms = 1900  # ✅ Still under 5000ms limit
avg_tx_ms = 38              # ⚠️ Slowest tx at 48ms (just under limit)
# Still passes validation!

# Block N+1000 (late attack)
block_validation_ms = 2400  # ✅ Still under 5000ms limit!
avg_tx_ms = 48              # ❌ But the AVERAGE is creeping toward the cap
# Each individual tx stays under the 50ms limit
# Accumulated effect: ~2x slower than baseline

The gap:

  • Individual limits: ✅ Enforced
  • Trend detection: ❌ Not monitored
  • Statistical patterns: ❌ Not analyzed
  • Accumulated effect: ❌ Not detected
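
A quick sketch of the blind spot (block times and drift rate are assumed for illustration): feed the per-block checker a series that drifts from 1200ms to ~2400ms and it never fires, because every block stays under the 5000ms cap.

MAX_BLOCK_VALIDATION_TIME = 5000  # ms, as above

def per_block_check(validation_ms):
    return validation_ms <= MAX_BLOCK_VALIDATION_TIME

series = [1200 + 1.2 * b for b in range(1000)]   # drifts to ~2400ms
print(all(per_block_check(t) for t in series))   # True: never rejected
print(round(series[-1] / series[0], 1))          # ~2.0x slower, unnoticed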

Part 3: Statistical Detection with Universe Framework

The Detector

from universe_toolbox import MinimalUniverse
import numpy as np
from scipy import stats

class SlowPoisonDetector(MinimalUniverse):
    """
    Statistical entropy accumulation detector
    
    Maps to universe framework:
    - Seed = Historical validation times
    - F = Statistical evolution (trend analysis)
    - E_p = New blocks (entropy injection)
    - Goal = Detect anomalous accumulation
    """
    
    def __init__(self, blockchain):
        # Seed: Historical baseline
        seed = {
            'validation_times': [],  # Time series
            'baseline_mean': None,
            'baseline_std': None,
            'trend_coefficient': 0,
            'anomaly_score': 0,
            'alert_triggered': False
        }
        
        # F: Statistical evolution
        def statistical_evolution(state, perspective):
            """
            Analyze statistical properties
            
            Detects:
            - Upward trends
            - Variance changes
            - Distribution shifts
            """
            times = state['validation_times']
            
            if len(times) < 200:
                return state  # Need a baseline window plus a recent window
            
            # Calculate statistics
            recent = times[-100:]  # Last 100 blocks
            baseline = times[:100]  # First 100 blocks
            
            # Trend analysis
            x = np.arange(len(times))
            y = np.array(times)
            slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
            
            state['trend_coefficient'] = slope
            state['baseline_mean'] = np.mean(baseline)
            state['baseline_std'] = np.std(baseline)
            
            # Anomaly scoring
            recent_mean = np.mean(recent)
            baseline_std = max(state['baseline_std'], 1e-9)  # guard against zero variance
            z_score = (recent_mean - state['baseline_mean']) / baseline_std
            
            state['anomaly_score'] = z_score
            
            # Alert if significant deviation
            if z_score > 3.0:  # 3 sigma threshold
                state['alert_triggered'] = True
            
            return state
        
        # E_p: New block validation times
        def block_entropy(state, perspective):
            """
            Each new block adds data point
            """
            # New blocks injected externally
            return state
        
        # Initialize universe
        super().__init__(
            seed=seed,
            evolution_f=statistical_evolution,
            entropy_sources=[block_entropy]
        )
        
        self.blockchain = blockchain
    
    def add_block_timing(self, block, validation_time):
        """
        Add new block's validation time to analysis
        
        Updates statistical model
        """
        state = self.series.state
        state['validation_times'].append(validation_time)
        
        # Evolve (analyze statistics)
        new_state = self.series.step(None)
        self.series.state = new_state
        
        # Check for alert
        if new_state['alert_triggered']:
            return self._generate_alert(new_state)
        
        return None
    
    def _generate_alert(self, state):
        """
        Generate alert with details
        """
        return {
            'alert': 'SLOW_POISON_DETECTED',
            'severity': 'HIGH',
            'trend': state['trend_coefficient'],
            'z_score': state['anomaly_score'],
            'current_mean': np.mean(state['validation_times'][-100:]),
            'baseline_mean': state['baseline_mean'],
            'increase_percentage': (
                (np.mean(state['validation_times'][-100:]) / state['baseline_mean'] - 1) * 100
            ),
            'recommendation': 'Investigate recent transaction patterns'
        }
    
    def analyze_comprehensive(self):
        """
        Comprehensive statistical analysis
        
        Multiple detection methods
        """
        state = self.series.state
        times = state['validation_times']
        
        if len(times) < 200:
            return {'status': 'insufficient_data'}  # need baseline + recent windows
        
        # 1. Trend analysis
        x = np.arange(len(times))
        y = np.array(times)
        slope, _, r_value, p_value, _ = stats.linregress(x, y)
        
        # 2. Change point detection
        # Split time series and compare distributions
        mid = len(times) // 2
        first_half = times[:mid]
        second_half = times[mid:]
        
        # Kolmogorov-Smirnov test
        ks_statistic, ks_p_value = stats.ks_2samp(first_half, second_half)
        
        # 3. Variance analysis
        baseline_var = np.var(times[:100])
        recent_var = np.var(times[-100:])
        variance_ratio = recent_var / baseline_var
        
        # 4. Moving average convergence
        window_size = 50
        ma = np.convolve(times, np.ones(window_size)/window_size, mode='valid')
        ma_slope = (ma[-1] - ma[0]) / len(ma)
        
        # 5. Percentile shift
        baseline_95th = np.percentile(times[:100], 95)
        recent_95th = np.percentile(times[-100:], 95)
        percentile_increase = (recent_95th / baseline_95th - 1) * 100
        
        return {
            'trend_slope': slope,
            'trend_r_squared': r_value ** 2,
            'trend_p_value': p_value,
            'distribution_shift': {
                'ks_statistic': ks_statistic,
                'ks_p_value': ks_p_value,
                'significant': ks_p_value < 0.01
            },
            'variance_ratio': variance_ratio,
            'moving_avg_slope': ma_slope,
            'percentile_95_increase': percentile_increase,
            'attack_detected': (
                slope > 0.01 and  # Positive trend
                p_value < 0.05 and  # Statistically significant
                ks_p_value < 0.01 and  # Distribution changed
                percentile_increase > 20  # 20% increase in tail
            )
        }

Detection in Action

import time

# validate_block(), investigate_recent_transactions() and alert_network()
# are assumed to be provided by the surrounding node software

# Initialize detector
detector = SlowPoisonDetector(blockchain)

# Monitor blocks
for block in blockchain.iter_blocks():
    # Measure validation time
    start = time.time()
    validate_block(block)
    validation_time = (time.time() - start) * 1000
    
    # Add to detector
    alert = detector.add_block_timing(block, validation_time)
    
    if alert:
        print(f"⚠️  SLOW POISON DETECTED!")
        print(f"   Z-score: {alert['z_score']:.2f}")
        print(f"   Increase: {alert['increase_percentage']:.1f}%")
        print(f"   Trend: {alert['trend']:.4f} ms/block")
        
        # Take action
        investigate_recent_transactions(block.height)
        alert_network()

# Comprehensive analysis
analysis = detector.analyze_comprehensive()

if analysis['attack_detected']:
    print("ATTACK CONFIRMED by multiple statistical tests:")
    print(f"  Trend: {analysis['trend_slope']:.4f} (p={analysis['trend_p_value']:.4f})")
    print(f"  Distribution shift: {analysis['distribution_shift']['significant']}")
    print(f"  95th percentile increase: {analysis['percentile_95_increase']:.1f}%")

Part 4: Why Universe Framework Works Here

Perfect Match

The problem:

  • Pattern emerges across time
  • Individual data points look normal
  • Requires statistical inference
  • Needs trend detection
  • Must identify accumulation

Universe framework strengths:

  1. State evolution → Track validation times over blocks
  2. Entropy injection → New blocks add data points
  3. Pattern emergence → Statistical anomalies emerge
  4. Multiple perspectives → Different statistical tests
  5. Adaptive thresholds → Learn baseline, detect deviations

Traditional approaches fail:

# Traditional (per-block check)
if block.validation_time > THRESHOLD:
    reject()
# Problem: Attack stays under threshold!

# Statistical (universe framework)
if statistical_trend() > BASELINE + 3*SIGMA:
    alert()
# Success: Detects accumulated pattern!
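
The contrast above is pseudocode; a runnable version using plain numpy (the thresholds and the synthetic drifting series are illustrative assumptions) might look like this:

import numpy as np

def hard_threshold_check(times_ms, limit_ms=5000):
    # Traditional: per-block hard limit
    return any(t > limit_ms for t in times_ms)

def statistical_check(times_ms, window=100):
    # Universe-style: compare the recent window against a learned baseline
    baseline = np.asarray(times_ms[:window])
    recent = np.asarray(times_ms[-window:])
    z = (recent.mean() - baseline.mean()) / max(baseline.std(), 1e-9)
    return z > 3.0  # 3-sigma alert

times = [1200 + 1.2 * b for b in range(1000)]  # same drifting series
print(hard_threshold_check(times))   # False: attack invisible
print(statistical_check(times))      # True: drift detected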

The Key Advantage

Universe framework enables:

class WhyItWorks:
    """
    Why universe framework catches slow poison
    """
    
    traditional_defense = {
        'checks': 'Individual transactions',
        'scope': 'Single block',
        'method': 'Hard thresholds',
        'blind_to': 'Trends, patterns, accumulation',
        'result': 'Attack succeeds'
    }
    
    universe_framework = {
        'checks': 'Statistical distribution',
        'scope': 'Historical time series',
        'method': 'Adaptive baselines + trend detection',
        'detects': 'Gradual accumulation patterns',
        'result': 'Attack detected early'
    }

Part 5: Defense Mechanisms

Multi-Layer Defense

class SlowPoisonDefense:
    """
    Comprehensive defense against gradual attacks
    
    Combines:
    1. Statistical monitoring (universe framework)
    2. Adaptive thresholds
    3. Transaction reputation
    4. Network coordination
    """
    
    def __init__(self, blockchain):
        self.detector = SlowPoisonDetector(blockchain)
        self.tx_reputation = {}
        self.adaptive_limits = AdaptiveLimits()  # sketched after this class
    
    def evaluate_transaction(self, tx):
        """
        Multi-factor evaluation
        
        Beyond just "valid/invalid"
        """
        # Traditional validation
        if not self.basic_validation(tx):
            return {'accept': False, 'reason': 'invalid'}
        
        # Estimate validation cost
        cost = self.estimate_validation_cost(tx)
        
        # Check transaction reputation
        reputation = self.get_sender_reputation(tx.sender)
        
        # Adaptive threshold (lowers if attack detected)
        current_threshold = self.adaptive_limits.get_threshold()
        
        # Decision
        if cost > current_threshold:
            if reputation > 0.8:  # Trusted sender
                return {'accept': True, 'priority': 'low'}
            else:
                return {'accept': False, 'reason': 'too_expensive'}
        
        return {'accept': True, 'priority': 'normal'}
    
    def update_adaptive_limits(self, alert):
        """
        Lower thresholds when attack detected
        
        Temporary tightening
        """
        if alert and alert['alert'] == 'SLOW_POISON_DETECTED':
            # Reduce acceptance threshold by 30%
            self.adaptive_limits.multiply(0.7)
            
            # Increase reputation weight
            self.adaptive_limits.reputation_factor *= 1.5
            
            # Auto-restore after 1000 blocks
            self.adaptive_limits.restore_after(1000)
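
AdaptiveLimits is used above but never defined in this post; a minimal sketch of one possible implementation (the class shape and default values are assumptions, not the post's actual code):

class AdaptiveLimits:
    """Acceptance threshold that tightens under attack and later restores.
    
    Minimal sketch; defaults are illustrative assumptions.
    """
    
    def __init__(self, base_threshold_ms=50.0):
        self.base_threshold = base_threshold_ms
        self.threshold = base_threshold_ms
        self.reputation_factor = 1.0
        self._restore_at_block = None
    
    def get_threshold(self):
        return self.threshold
    
    def multiply(self, factor):
        # Tighten (factor < 1) or relax (factor > 1) the current threshold
        self.threshold *= factor
    
    def restore_after(self, blocks, current_block=0):
        # Schedule a reset of all limits after a cool-down window
        self._restore_at_block = current_block + blocks
    
    def on_new_block(self, block_height):
        # Call once per block; restores defaults when the window expires
        if (self._restore_at_block is not None
                and block_height >= self._restore_at_block):
            self.threshold = self.base_threshold
            self.reputation_factor = 1.0
            self._restore_at_block = None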

Network Coordination

class NetworkDefense:
    """
    Coordinate defense across miners
    
    Share statistical insights
    """
    
    def __init__(self, node_id, blockchain):
        self.id = node_id  # used when broadcasting statistics
        self.local_detector = SlowPoisonDetector(blockchain)
        self.network_peers = []
    
    def share_statistics(self):
        """
        Broadcast statistical findings to peers
        
        Enable distributed detection
        """
        local_stats = self.local_detector.analyze_comprehensive()
        if local_stats.get('status') == 'insufficient_data':
            return  # not enough history to share yet
        
        # Share with peers
        for peer in self.network_peers:
            peer.receive_statistics({
                'node': self.id,
                'trend_slope': local_stats['trend_slope'],
                'percentile_increase': local_stats['percentile_95_increase'],
                'timestamp': time.time()
            })
    
    def aggregate_peer_data(self, peer_stats):
        """
        Combine statistics from multiple nodes
        
        More robust detection
        """
        if len(peer_stats) < 10:
            return None  # Need more data
        
        # Average trend across peers
        avg_trend = np.mean([s['trend_slope'] for s in peer_stats])
        
        # Count nodes detecting attack (0.01 ms/block matches the
        # trend threshold used in analyze_comprehensive)
        TREND_THRESHOLD = 0.01
        nodes_detecting = sum(
            1 for s in peer_stats
            if s['trend_slope'] > TREND_THRESHOLD
        )
        
        # Consensus-based alert
        if nodes_detecting > len(peer_stats) * 0.6:  # 60% consensus
            return {
                'alert': 'NETWORK_WIDE_SLOW_POISON',
                'nodes_detecting': nodes_detecting,
                'avg_trend': avg_trend,
                'confidence': nodes_detecting / len(peer_stats)
            }
        
        return None
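
A quick usage sketch of the consensus rule (the peer statistics are synthetic and a blockchain handle is assumed to be in scope, for illustration only):

defense = NetworkDefense(node_id='node-0', blockchain=blockchain)

# 8 of 12 peers report a clear upward trend, 4 report none
peer_stats = (
    [{'trend_slope': 0.03} for _ in range(8)] +
    [{'trend_slope': 0.002} for _ in range(4)]
)

alert = defense.aggregate_peer_data(peer_stats)
print(alert['alert'], alert['confidence'])
# NETWORK_WIDE_SLOW_POISON 0.666... (8 of 12 peers above threshold)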

Part 6: Real-World Simulation

Attack Simulation

# Simulate slow poison attack (helper stubs are sketched after this block)
attacker = SlowPoisonAttack()
detector = SlowPoisonDetector(blockchain)

results = {
    'blocks': [],
    'validation_times': [],
    'alerts': []
}

for block_num in range(2000):
    # Attacker creates poison block
    if block_num > 500:  # Start attack at block 500
        poison_txs = [
            attacker.create_poison_tx() 
            for _ in range(5)
        ]
        normal_txs = [
            create_normal_tx() 
            for _ in range(45)
        ]
        block = create_block(normal_txs + poison_txs)
    else:
        # Normal blocks before attack
        block = create_normal_block()
    
    # Measure validation time
    validation_time = measure_validation_time(block)
    
    results['blocks'].append(block_num)
    results['validation_times'].append(validation_time)
    
    # Detector analysis
    alert = detector.add_block_timing(block, validation_time)
    
    if alert:
        results['alerts'].append({
            'block': block_num,
            'alert': alert
        })
        print(f"Block {block_num}: ATTACK DETECTED!")

# Plot results
import matplotlib.pyplot as plt

plt.figure(figsize=(12, 6))
plt.plot(results['blocks'], results['validation_times'], label='Validation Time')
plt.axvline(500, color='red', linestyle='--', label='Attack Start')

for alert in results['alerts']:
    plt.axvline(alert['block'], color='orange', linestyle=':', alpha=0.5)

plt.xlabel('Block Number')
plt.ylabel('Validation Time (ms)')
plt.title('Slow Poison Attack Detection')
plt.legend()
plt.savefig('slow_poison_detection.png')
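
The helpers create_normal_tx, create_block, create_normal_block and measure_validation_time are assumed above; minimal stubs that make the simulation self-contained (a synthetic timing model, not real node measurements):

import random

def create_normal_tx():
    return {'validation_ms': random.uniform(0.5, 2.0)}

def create_block(txs):
    return {'transactions': txs}

def create_normal_block(txs_per_block=50):
    return create_block([create_normal_tx() for _ in range(txs_per_block)])

def measure_validation_time(block):
    # Synthetic cost model: use a tx's stated cost if present, else
    # derive it from script complexity (mirrors SlowPoisonAttack's estimate)
    total = 0.0
    for tx in block['transactions']:
        if 'validation_ms' in tx:
            total += tx['validation_ms']
        else:
            total += 1.0 + tx['scripts'].get('stack_operations', 0) * 0.004
    return total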

Expected results:

Block 500-700: Attack begins (unnoticed)
Block 800: First statistical anomaly
Block 950: Alert triggered (z-score > 3)
Block 1000: Attack confirmed by multiple tests
Detection lag: 450 blocks (~75 hours at 10-minute blocks)

Without detector: Attack continues undetected indefinitely

With detector: Attack detected in ~450 blocks, defenses activated


Conclusion

The Novel Attack

Slow Poison:

  • Gradual entropy accumulation
  • Individual validity maintained
  • Systemic effect emerges
  • Traditional defenses fail

The Statistical Defense

Universe framework enables:

  • Trend detection across time
  • Statistical anomaly identification
  • Adaptive threshold adjustment
  • Distributed coordination

Key insight:

Traditional defenses check individual validity.

Universe framework detects accumulated patterns.

Slow poison requires statistical defense.


Official Soundtrack: Skeng - kassdedi @DegenSpartan

Research Team: Cueros de Sosua

References:

  • Post 820: Nonce Reuse Detection - Pattern detection
  • Post 816: Universe Toolbox - Framework
  • Post 704: Default to Questions - Never assume complete

Created: 2026-02-14
Status: 🔬 NOVEL ATTACK VECTOR + DEFENSE

⚠️ Security Research: This describes both attack and defense for research purposes.

∞
