STDIN Pipelines
Pipe output from any command directly to zo for AI analysis. zo follows the Unix philosophy, making AI a natural stage in your command pipelines.
Overview
STDIN pipeline integration allows you to:
- Analyze command output without intermediate files
- Combine piped input with prompts and file references
- Build complex data processing workflows
- Integrate AI into automation and scripting
How It Works
zo uses the IsTerminal trait to detect piped input automatically. When STDIN is piped (not interactive), zo reads it and combines it with your prompt for the AI model.
Message Construction
When you pipe input to zo, the final message sent to the AI combines:
- File references (if the @filename syntax is used)
- Your prompt (the text you provide)
- STDIN content (the piped input)
This ordering provides context first, then the question, then the data to analyze.
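As a rough sketch of that ordering (this mirrors the description above, not zo's actual source; build_message is a hypothetical helper), the final message can be thought of as a simple concatenation:

```shell
# Hypothetical sketch of message assembly - illustrates the ordering only,
# not zo's real implementation.
build_message() {
  file_context="$1"; prompt="$2"; stdin_data="$3"
  # Context first, then the question, then the data to analyze
  printf '%s\n%s\n%s\n' "$file_context" "$prompt" "$stdin_data"
}

build_message '<contents of README.md>' 'Do these changes need README updates?' '<git diff output>'
```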
Basic Usage
Simple Pipeline
# Pipe command output directly
echo "Hello World" | zo 'Translate to Spanish'
# Pipe file contents
cat error.log | zo 'Summarize the errors'
# Pipe stderr too
cargo build 2>&1 | zo 'Explain these errors'
With Model Selection
# Use specific model with piped input
git diff | zo /sonnet 'Review these changes'
# CLI flag also works
ps aux | zo --model opus 'Which processes are problematic?'
With File References
# Combine STDIN with file references
git diff | zo @README.md 'Do these changes require README updates?'
# Multiple files plus STDIN
cat data.csv | zo @schema.sql @config.toml 'Validate this data'
Common Use Cases
Code Review from Git
# Review uncommitted changes
git diff | zo 'Summarize these changes and check for bugs'
# Review staged changes
git diff --cached | zo /reviewer 'Review before I commit'
# Review specific commit
git show HEAD | zo 'Explain what this commit does and why'
# Review PR changes
git diff main...feature | zo 'Generate detailed PR review'
# Review recent commits
git log --oneline -20 | zo 'Summarize recent development activity'
Error Analysis
# Rust build errors
cargo build 2>&1 | zo /sonnet 'Explain these errors and how to fix them'
# JavaScript test failures
npm test 2>&1 | zo 'Why are these tests failing?'
# Python runtime errors
python script.py 2>&1 | zo /debugger 'Root cause analysis'
# Go compilation errors
go build 2>&1 | zo 'Fix these compilation errors'
# Make errors
make 2>&1 | zo 'What is wrong with my build?'
Log Analysis
# Application logs
tail -n 100 /var/log/nginx/error.log | zo 'Find patterns in these errors'
# Docker logs
docker logs myapp | zo 'Identify the root cause of the crash'
# System logs
journalctl -n 100 | zo 'Any security concerns?'
# Custom app logs
tail -n 50 app.log | zo 'Summarize recent activity'
# Multi-file logs
cat *.log | zo 'Find anomalies across all logs'
Data Processing
# CSV analysis
cat large_file.csv | zo 'Calculate summary statistics'
# JSON extraction
curl -s https://api.example.com/data | zo 'Extract all email addresses'
# Data transformation
cat input.json | zo 'Convert this to CSV format'
# Data validation
cat data.csv | zo @schema.json 'Validate against this schema'
# SQL generation from data
head -10 sample.csv | zo 'Generate SQL CREATE TABLE for this data'
System Diagnostics
# Memory usage
ps aux --sort=-%mem | head -20 | zo 'Which processes are using too much memory?'
# Disk space
df -h | zo 'Do I have any disk space issues?'
# Network connections
netstat -tuln | zo 'Any suspicious connections?'
# Process tree
pstree -p | zo 'Explain what is running on this system'
# System resources
top -b -n 1 | head -20 | zo 'System health check'
Documentation
# Man page summaries
man grep | zo 'Explain the most useful grep options with examples'
# API documentation
curl -s https://docs.rust-lang.org/book/ch04-00-understanding-ownership.html \
| zo 'Summarize the key concepts'
# Code documentation
cargo doc --no-deps 2>&1 | zo 'Any doc warnings I should fix?'
# README generation
ls -la | zo 'Generate directory structure documentation'
Advanced Workflows
Multi-Stage Pipelines
# Filter → Transform → Analyze
cat access.log \
| grep "ERROR" \
| zo 'Categorize these errors and suggest fixes'
# Fetch → Parse → Summarize
curl -s https://api.github.com/repos/rust-lang/rust/issues \
| jq '.[].title' \
| zo 'Summarize trending issues'
# Process → Validate → Report
cat data.csv \
| awk -F',' '{print $1, $3}' \
| zo 'Check for invalid entries'
Combining with Chat Mode
# Start chat with piped context
cat error.log | zo --chat 'Let us debug this together'
> What should I check first?
> I tried that, what else?
> exit
# Initial context from git
git diff | zo --chat /reviewer 'Let us review these changes'
> Focus on the authentication code
> What about security?
> exit
Automation Scripts
#!/bin/bash
# automated-review.sh
# Get changes
diff=$(git diff --cached)
if [ -z "$diff" ]; then
echo "No staged changes"
exit 0
fi
# AI review
echo "$diff" | zo /reviewer 'Quick review. Flag only critical issues.'
# Check exit code
if [ $? -ne 0 ]; then
echo "Review failed"
exit 1
fi
echo "Review complete"
CI/CD Integration
# In .gitlab-ci.yml or similar
script:
- git diff origin/main...HEAD | zo /reviewer 'Automated PR review' > review.md
- cat review.md
- grep -qi "critical" review.md && exit 1 || exit 0
Shell Integration
Add these functions to ~/.bashrc or ~/.zshrc:
Quick Helpers
# Explain command errors
fix() {
"$@" 2>&1 | zo 'Why did this fail and how do I fix it?'
}
# Analyze recent history
history-analyze() {
history | tail -20 | zo 'What have I been working on?'
}
# Commit message generator
gcm() {
git diff --cached | zo 'Generate conventional commit message'
}
# Log analyzer
logs() {
tail -n "${1:-100}" /var/log/syslog | zo 'Summarize errors'
}
Usage Examples
# Use fix function
fix cargo build
# Generate commit message
gcm
# Analyze logs
logs 200
Technical Details
STDIN Detection
zo uses Rust's std::io::IsTerminal trait:
use std::io::{IsTerminal, Read};

let mut stdin = std::io::stdin();
if !stdin.is_terminal() {
    // STDIN is piped - read it all before building the message
    let mut content = String::new();
    stdin.read_to_string(&mut content)?;
}
Empty STDIN Handling
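The same check can be reproduced in the shell with test -t 0, which is true only when file descriptor 0 is attached to a terminal (check_stdin below is just an illustrative helper):

```shell
# Shell analogue of the is_terminal() check above
check_stdin() {
  if [ -t 0 ]; then
    echo "terminal"
  else
    echo "piped"
  fi
}

# Inside a pipeline, stdin is a pipe, not a tty, so this prints "piped"
echo data | check_stdin
```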
Empty STDIN is treated as None - no content added to the message. This prevents sending empty strings to the API.
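Scripts that wrap zo can apply the same guard themselves. A minimal sketch (analyze_if_nonempty is a hypothetical wrapper, not part of zo):

```shell
# Skip the AI call entirely when the pipe carries no data
analyze_if_nonempty() {
  input=$(cat)
  if [ -z "$input" ]; then
    echo "skipped: empty input" >&2
  else
    # A real script would run: printf '%s' "$input" | zo 'analyze'
    echo "would send ${#input} bytes"
  fi
}

printf '' | analyze_if_nonempty   # skipped: empty input
echo hello | analyze_if_nonempty  # would send 5 bytes
```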
Character Encoding
zo reads STDIN as UTF-8. Non-UTF-8 input may cause errors or be replaced with Unicode replacement characters.
Size Limits
While zo has no hard limit on STDIN size, keep in mind:
- API limits: OpenRouter models have context window limits (typically 8K-200K tokens)
- Memory: Very large inputs are loaded into memory
- Cost: Larger inputs consume more tokens
Recommendation: For files larger than 1MB, consider using file references (@filename) or filtering/sampling the data first.
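A quick pre-flight check can approximate token usage before piping. The 4-bytes-per-token ratio below is a rough rule of thumb for English text, not an exact tokenizer:

```shell
# Estimate token count of piped input (assumes ~4 bytes per token)
estimate_tokens() {
  echo $(( $(wc -c) / 4 ))
}

printf 'hello world, this is a test line\n' | estimate_tokens  # 33 bytes -> prints 8
```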
Exit Codes
# Success
echo "test" | zo 'analyze'
echo $? # 0
# API error
echo "test" | zo 'analyze' # (network failure)
echo $? # 1
# Empty prompt and STDIN
echo "" | zo ""
# Error: No input provided
Best Practices
✅ Do This
# Capture stderr with stdout
command 2>&1 | zo 'analyze'
# Limit large outputs
tail -n 100 huge.log | zo 'summarize'
# Combine with file context
git diff | zo @CONTRIBUTING.md 'Check against guidelines'
# Use specific models for specific tasks
cat error.log | zo /debugger 'diagnose'
❌ Avoid This
# Don't pipe huge files without filtering
cat 10GB.log | zo 'analyze' # Too large!
# Don't forget stderr
command | zo 'fix errors' # Errors might be on stderr
# Don't pipe binary data
cat image.png | zo 'analyze' # Text models only
Troubleshooting
No Input Received
Problem: zo says "No input provided" even though you piped data.
Solution: Check if STDIN is actually empty:
# Debug: check what's being piped
command | tee /dev/tty | zo 'analyze'
# Or save to file first
command > output.txt
cat output.txt | zo 'analyze'
Truncated Output
Problem: Very long output gets truncated.
Solution: Filter or sample the data:
# Take first N lines
head -100 huge.log | zo 'analyze'
# Take last N lines
tail -100 huge.log | zo 'analyze'
# Sample every Nth line
awk 'NR % 10 == 0' huge.log | zo 'analyze'
# Filter for relevant content
grep "ERROR" huge.log | zo 'analyze'
Special Characters
Problem: Special characters in output cause issues.
Solution: zo handles UTF-8 well, but for other encodings:
# Convert to UTF-8
iconv -f ISO-8859-1 -t UTF-8 file.txt | zo 'analyze'
# Remove non-printable characters
tr -cd '[:print:]\n' < file.txt | zo 'analyze'
Chat Mode + STDIN
Problem: Want to use STDIN in chat mode but also type interactively.
Solution: zo uses /dev/tty for interactive input after initial STDIN:
# This works! STDIN for first message, then interactive
cat context.txt | zo --chat 'Let us discuss this'
> Follow-up question here
> exit
Performance Tips
1. Filter Before Piping
# ❌ Slow - sends everything
cat huge.log | zo 'find errors'
# ✅ Fast - filter first
grep "ERROR" huge.log | zo 'categorize'
2. Use Specific Models
# Fast model for simple tasks
ps aux | zo /flash 'quick check'
# Powerful model for complex analysis
complex_output | zo /opus 'deep analysis'
3. Batch Similar Queries
# ❌ Inefficient - multiple API calls
cat file1.txt | zo 'analyze'
cat file2.txt | zo 'analyze'
cat file3.txt | zo 'analyze'
# ✅ Better - one API call
cat file*.txt | zo 'analyze each section'
Real-World Examples
Development Workflow
# Morning standup summary
git log --since="yesterday" --author="$(git config user.email)" \
| zo 'Summarize what I worked on yesterday'
# Pre-commit checks
git diff --cached | zo /reviewer 'Quick sanity check before commit'
# Dependency check
npm outdated | zo 'Which updates are important?'
DevOps Workflow
# Container health
docker ps -a | zo 'Any containers in bad state?'
# Kubernetes debugging
kubectl logs deployment/myapp --tail=100 \
| zo 'Why is this pod crashing?'
# Resource usage
kubectl top nodes | zo 'Any resource constraints?'
Data Science Workflow
# Quick stats
head -1000 dataset.csv | zo 'Basic statistics for each column'
# Data quality
cat data.csv | zo 'Check for missing values, outliers, anomalies'
# SQL generation
head -5 data.csv | zo 'Generate SQL schema for this data'
Next Steps
- Chat Mode → Multi-turn conversations
- Shell Integration → Integrate into your workflow
- Git Workflows → Git-specific examples
- Quick Start → General zo usage