How to Implement LogConv in Python — Step-by-Step Tutorial
1. Goal
Implement a stable, efficient “LogConv”: convolution performed in the log domain to reduce underflow/overflow when inputs span many orders of magnitude (useful for log-likelihoods, probabilities, or extreme-value signals).
2. Key idea
Compute the convolution z = x * y, but operate on the log-values a = log(x), b = log(y). Use the log-sum-exp trick:

log(z_k) = log(sum_i exp(a_i + b_{k-i})) = M + log(sum_i exp((a_i + b_{k-i}) - M)), where M = max_i (a_i + b_{k-i}).
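The log-sum-exp trick on its own can be sketched as a small helper (a minimal version; SciPy also ships `scipy.special.logsumexp`):

```python
import numpy as np

def logsumexp(a):
    """Stable log(sum(exp(a))): subtract the max before exponentiating."""
    M = np.max(a)
    if np.isneginf(M):   # every term is zero in the linear domain
        return -np.inf
    return M + np.log(np.sum(np.exp(a - M)))

# Summing values that would overflow a naive exp():
print(logsumexp(np.array([1000.0, 1000.0])))  # 1000 + log(2)
```

Subtracting M guarantees the largest exponent is 0, so `exp` never overflows and at least one term is exactly 1.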
3. Dependencies
- numpy
- (optional) scipy.signal for validation
Install:
```bash
pip install numpy scipy
```
4. Implementation (1D discrete convolution, full mode)
```python
import numpy as np
from scipy.signal import fftconvolve  # optional, for verification

def logconv1d(log_x, log_y):
    """Compute log(x * y) given log_x = log(x), log_y = log(y).

    Returns array log_z for the full discrete convolution.
    """
    n = log_x.size
    m = log_y.size
    out_len = n + m - 1
    log_z = np.full(out_len, -np.inf)
    # For each output index k, consider valid i where 0 <= i < n and 0 <= k-i < m
    for k in range(out_len):
        i_min = max(0, k - (m - 1))
        i_max = min(n - 1, k)
        # compute pairwise sums s = a_i + b_{k-i}
        idx_x = np.arange(i_min, i_max + 1)
        idx_y = k - idx_x
        s = log_x[idx_x] + log_y[idx_y]
        M = np.max(s)
        # log-sum-exp; if M is -inf, the result stays -inf
        if np.isneginf(M):
            log_z[k] = -np.inf
        else:
            log_z[k] = M + np.log(np.sum(np.exp(s - M)))
    return log_z
```
5. Vectorized (faster) approximate version using FFT
```python
from scipy.signal import fftconvolve

def logconv1d_fft_approx(log_x, log_y):
    """Approximate log-convolution by convolving max-shifted, exponentiated values.

    Note: this is an approximation and may lose exact log-domain stability
    for extreme dynamic ranges. Use for speed, with caution.
    """
    x = np.exp(log_x - np.max(log_x))
    y = np.exp(log_y - np.max(log_y))
    conv = fftconvolve(x, y, mode='full')
    log_z = np.log(conv) + (np.max(log_x) + np.max(log_y))
    return log_z
```
6. Numerical considerations
- Inputs may contain -inf for zeros; handle via isneginf checks.
- Use float64 for better precision.
- For long signals, the O(n*m) direct method is slow; use block-wise methods or approximations.
- The FFT-based approach is faster (O((n+m) log(n+m))) but can lose precision when dynamic range is huge.
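As a concrete illustration of the -inf handling mentioned above: NumPy's built-in `np.logaddexp` propagates -inf the same way the direct method's `isneginf` check does, which makes it handy for pairwise accumulation.

```python
import numpy as np

# Zeros in the linear domain map to -inf in the log domain; np.log warns on 0,
# so silence the divide warning when building the log representation.
with np.errstate(divide="ignore"):
    log_x = np.log(np.array([0.5, 0.0]))

# log(exp(-inf) + exp(a)) == a, and log(exp(-inf) + exp(-inf)) == -inf (no NaN):
print(np.logaddexp(log_x[1], np.log(0.25)))  # == log(0.25)
print(np.logaddexp(-np.inf, -np.inf))        # -inf
```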
7. Validation
```python
# create sample inputs spanning many orders of magnitude
x = np.array([0.1, 0.001, 2e-10])
y = np.array([1e5, 0.01, 0.2])
log_x = np.log(x)
log_y = np.log(y)

lz = logconv1d(log_x, log_y)
z = np.exp(lz)

# compare with direct convolution in the normal domain (may underflow)
z_direct = np.convolve(x, y, mode='full')
print("log-domain conv ->", z)
print("direct conv     ->", z_direct)
```
8. Extensions
- 2D convolution: extend index loops to two dimensions or use separable filters.
- Strided/Causal modes: adjust index ranges for ‘valid’ or ‘same’ outputs.
- Use jit (numba) or C++ to accelerate inner loops for large arrays.
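The ‘same’-mode extension can be sketched by trimming the full-mode output; the helper name and alignment convention below are my own, chosen to match `np.convolve(..., mode='same')` for odd-length kernels with `n >= m`:

```python
import numpy as np

def logconv1d_same(log_z_full, n, m):
    """Trim a full-mode log-convolution (length n + m - 1) down to
    'same' mode: n output samples, centered like np.convolve."""
    start = (m - 1) // 2
    return log_z_full[start:start + n]
```

‘valid’ mode works the same way: slice out the `max(n, m) - min(n, m) + 1` samples where the signals fully overlap.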
9. Quick checklist before use
- Ensure inputs represent positive quantities; log domain requires >0 (use -inf for zeros).
- Choose direct log-sum-exp for correctness or FFT-approx for speed.
- Test against regular convolution on representative data.