Logging: Levels, Handlers, Formatters & Structured Logging
📖 Concept
Python's logging module is a production-grade, highly configurable logging framework built into the standard library. It supports hierarchical loggers, multiple output destinations (handlers), customizable message formatting, and filtering — all without third-party dependencies.
Why not print()?
- No log levels (can't filter debug vs error)
- No timestamps, source file, or line numbers
- No way to route to files, syslog, or external services
- Can't disable without removing code
- Not thread-safe for concurrent writes
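A minimal sketch (logger and handler names here are illustrative) showing the metadata `print()` cannot give you, attached automatically by the logging machinery:

```python
import io
import logging

buf = io.StringIO()  # stand-in for stderr so the output is inspectable
handler = logging.StreamHandler(buf)
handler.setFormatter(
    logging.Formatter("%(asctime)s | %(levelname)s | %(module)s:%(lineno)d | %(message)s")
)

logger = logging.getLogger("demo.basics")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Processing user %s", 123)  # timestamp, level, and location for free
logger.debug("Suppressed: DEBUG is below the INFO threshold")
```

Unlike a `print()` call, the second message disappears the moment the level is raised, with no code changes.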
Log levels (in order of severity):
| Level | Value | Use Case |
|---|---|---|
| DEBUG | 10 | Detailed diagnostic info (variable values, flow tracing) |
| INFO | 20 | Confirmation that things are working (startup, shutdown, milestones) |
| WARNING | 30 | Something unexpected but not broken (deprecated API, retry, fallback) |
| ERROR | 40 | Something failed but the app continues (failed request, missing resource) |
| CRITICAL | 50 | The app cannot continue (out of memory, database down, corrupted state) |
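A logger's level acts as a numeric threshold: records below it are dropped before any handler runs. A quick sketch (logger name is illustrative):

```python
import logging

perf_logger = logging.getLogger("demo.levels")
perf_logger.setLevel(logging.WARNING)  # numeric value 30

# The level constants are ordered integers
assert logging.DEBUG < logging.INFO < logging.WARNING < logging.ERROR < logging.CRITICAL

assert not perf_logger.isEnabledFor(logging.INFO)   # 20 < 30: dropped
assert perf_logger.isEnabledFor(logging.ERROR)      # 40 >= 30: emitted
```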
Architecture: The logging system has four key components:
- Loggers — the API you call (`logger.info("msg")`). Organized in a dot-separated hierarchy (`app.db.queries`).
- Handlers — where logs go (`StreamHandler`, `FileHandler`, `RotatingFileHandler`, `SysLogHandler`, etc.).
- Formatters — how log messages look (timestamp format, included fields).
- Filters — fine-grained control over which records are emitted.
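All four components can be wired together in a few lines. This is a minimal sketch; the logger name and the `DropHealthChecks` filter are illustrative, not part of the standard library:

```python
import io
import logging

buf = io.StringIO()  # inspectable destination instead of a real stream

handler = logging.StreamHandler(buf)                  # Handler: where records go
handler.setFormatter(                                 # Formatter: how they look
    logging.Formatter("%(levelname)s %(name)s %(message)s")
)

class DropHealthChecks(logging.Filter):               # Filter: which records pass
    def filter(self, record: logging.LogRecord) -> bool:
        return "healthcheck" not in record.getMessage()

handler.addFilter(DropHealthChecks())

wired = logging.getLogger("demo.wiring")              # Logger: the API you call
wired.setLevel(logging.INFO)
wired.addHandler(handler)

wired.info("GET /users")
wired.info("GET /healthcheck")  # silently dropped by the filter
```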
Best practice: Always use `logging.getLogger(__name__)` to create module-level loggers. This creates a logger hierarchy matching your package structure, allowing fine-grained configuration. Never use the root logger (`logging.info()`) in library code.
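The hierarchy is purely name-based, which is what makes per-package configuration work. A small sketch with illustrative dotted names mirroring a package layout:

```python
import logging

# "app" is an ancestor of "app.db.queries" purely by dotted name
app = logging.getLogger("app")
queries = logging.getLogger("app.db.queries")

app.setLevel(logging.ERROR)

# A child logger with no level of its own inherits from the
# nearest ancestor that has one configured
assert queries.getEffectiveLevel() == logging.ERROR
```

Configuring `app` once therefore affects every logger created with `__name__` inside that package.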
💻 Code Example
```python
# ============================================================
# Basic logging setup — the minimum viable configuration
# ============================================================
import logging

# BAD — using print for diagnostics
# print(f"Processing user {user_id}")  # No level, no timestamp, no filtering

# BAD — using root logger in a module/library
# logging.info("...")  # Pollutes the root logger, hard to configure

# GOOD — module-level logger with __name__
logger = logging.getLogger(__name__)

# Quick setup for scripts (NOT for libraries)
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s | %(levelname)-8s | %(name)s | %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)

# Using different levels
logger.debug("Connecting to database at %s:%d", "localhost", 5432)
logger.info("Application started successfully")
logger.warning("Cache miss for key %r, falling back to DB", "user:123")
logger.error("Failed to process payment for order %s", "ORD-456")
logger.critical("Database connection pool exhausted — shutting down")


# ============================================================
# Production logging configuration — programmatic setup
# ============================================================
import logging.handlers
import sys
from pathlib import Path


def setup_logging(
    log_dir: str = "logs",
    level: str = "INFO",
    max_bytes: int = 10 * 1024 * 1024,  # 10 MB
    backup_count: int = 5,
):
    """Configure production-grade logging with console + rotating file."""
    log_path = Path(log_dir)
    log_path.mkdir(parents=True, exist_ok=True)

    # Create root logger
    root_logger = logging.getLogger()
    root_logger.setLevel(getattr(logging, level.upper()))

    # Clear existing handlers (prevents duplicate logs on re-init)
    root_logger.handlers.clear()

    # --- Console handler (human-readable) ---
    console_handler = logging.StreamHandler(sys.stderr)
    console_handler.setLevel(logging.INFO)
    console_formatter = logging.Formatter(
        fmt="%(asctime)s | %(levelname)-8s | %(name)s:%(lineno)d | %(message)s",
        datefmt="%H:%M:%S",
    )
    console_handler.setFormatter(console_formatter)

    # --- Rotating file handler (detailed, for debugging) ---
    file_handler = logging.handlers.RotatingFileHandler(
        filename=log_path / "app.log",
        maxBytes=max_bytes,
        backupCount=backup_count,
        encoding="utf-8",
    )
    file_handler.setLevel(logging.DEBUG)
    file_formatter = logging.Formatter(
        fmt="%(asctime)s | %(levelname)-8s | %(name)s:%(funcName)s:%(lineno)d | %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    )
    file_handler.setFormatter(file_formatter)

    # --- Error-only file handler (for alerts/monitoring) ---
    error_handler = logging.handlers.RotatingFileHandler(
        filename=log_path / "errors.log",
        maxBytes=max_bytes,
        backupCount=backup_count,
        encoding="utf-8",
    )
    error_handler.setLevel(logging.ERROR)
    error_handler.setFormatter(file_formatter)

    # Attach all handlers
    root_logger.addHandler(console_handler)
    root_logger.addHandler(file_handler)
    root_logger.addHandler(error_handler)

    # Silence noisy third-party loggers
    logging.getLogger("urllib3").setLevel(logging.WARNING)
    logging.getLogger("sqlalchemy.engine").setLevel(logging.WARNING)

    root_logger.info("Logging initialized (level=%s, dir=%s)", level, log_dir)


# Initialize once at application startup
# setup_logging(level="DEBUG")


# ============================================================
# Logging with structured context — LoggerAdapter
# ============================================================
logger = logging.getLogger("app.orders")

def process_order(order_id: str, user_id: str, amount: float):
    """Use structured logging with extra context."""
    log = logging.LoggerAdapter(logger, {"order_id": order_id, "user_id": user_id})

    log.info("Processing order for $%.2f", amount)

    try:
        # Simulate payment
        if amount > 10000:
            raise ValueError("Amount exceeds single transaction limit")
        log.info("Payment processed successfully")
    except ValueError as e:
        # exc_info=True includes the full traceback in the log
        log.error("Payment failed: %s", e, exc_info=True)
        raise


# ============================================================
# JSON structured logging — for log aggregation systems
# ============================================================
import json
import traceback

class JSONFormatter(logging.Formatter):
    """Format log records as JSON for ELK/Datadog/CloudWatch."""

    def format(self, record: logging.LogRecord) -> str:
        log_data = {
            "timestamp": self.formatTime(record, self.datefmt),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno,
            "process": record.process,
            "thread": record.thread,
        }

        # Include exception info if present
        if record.exc_info and record.exc_info[0] is not None:
            log_data["exception"] = {
                "type": record.exc_info[0].__name__,
                "message": str(record.exc_info[1]),
                "traceback": traceback.format_exception(*record.exc_info),
            }

        # Include any extra fields from LoggerAdapter or extra={}
        for key in ("order_id", "user_id", "request_id", "duration_ms"):
            if hasattr(record, key):
                log_data[key] = getattr(record, key)

        return json.dumps(log_data, default=str)


# Apply JSON formatter for production
def setup_json_logging():
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JSONFormatter())
    root = logging.getLogger()
    root.handlers.clear()
    root.addHandler(handler)
    root.setLevel(logging.INFO)


# ============================================================
# Custom filter — add request context to all log messages
# ============================================================
import threading

# Thread-local storage for request context
_request_context = threading.local()

class RequestContextFilter(logging.Filter):
    """Inject request_id into every log record automatically."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.request_id = getattr(_request_context, "request_id", "N/A")
        record.client_ip = getattr(_request_context, "client_ip", "N/A")
        return True  # Always pass — we're adding data, not filtering

# Usage in a web framework middleware
def handle_request(request):
    """Simulate middleware that sets request context."""
    import uuid
    _request_context.request_id = str(uuid.uuid4())[:8]
    _request_context.client_ip = "192.168.1.100"

    logger = logging.getLogger("app.api")
    logger.addFilter(RequestContextFilter())
    logger.info("Handling request to /api/users")
    # Log output includes: request_id=a1b2c3d4, client_ip=192.168.1.100


# ============================================================
# Dictionary-based configuration (logging.config.dictConfig)
# ============================================================
import logging.config

LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {
            "format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s"
        },
        "json": {
            "()": "__main__.JSONFormatter",  # Use custom formatter class
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "standard",
            "stream": "ext://sys.stderr",
        },
        "file": {
            "class": "logging.handlers.RotatingFileHandler",
            "formatter": "standard",
            "filename": "logs/app.log",
            "maxBytes": 10485760,
            "backupCount": 5,
        },
    },
    "loggers": {
        "app": {
            "handlers": ["console", "file"],
            "level": "DEBUG",
            "propagate": False,
        },
        "app.db": {
            "handlers": ["file"],
            "level": "WARNING",
            "propagate": False,
        },
    },
    "root": {
        "handlers": ["console"],
        "level": "WARNING",
    },
}

# Apply configuration once at startup
# logging.config.dictConfig(LOGGING_CONFIG)


# ============================================================
# Performance: lazy formatting and guards
# ============================================================
logger = logging.getLogger("app.perf")
items = [1, 2, 3]
large_object = {"items": items}  # stand-in for an expensive-to-serialize structure

# GOOD — lazy formatting (string only built if level is enabled)
logger.debug("Processing %d items: %r", len(items), items)

# BAD — eager formatting (string always built, even if DEBUG is off)
# logger.debug(f"Processing {len(items)} items: {items}")

# Guard expensive operations
if logger.isEnabledFor(logging.DEBUG):
    # Only compute the expensive dump if DEBUG is enabled
    debug_data = json.dumps(large_object, indent=2)
    logger.debug("Full state dump:\n%s", debug_data)
```
🏋️ Practice Exercise
Exercises:
1. Set up a production logging configuration for a web application with three handlers: console (INFO+, colored), rotating file (DEBUG+, 10 MB max, 5 backups), and a separate error file (ERROR+ only). Use `dictConfig`.
2. Create a `JSONFormatter` class that outputs structured JSON log lines with fields: timestamp, level, logger name, message, and any extra context fields. Test it with a `LoggerAdapter` that adds `request_id` and `user_id`.
3. Implement a custom logging `Filter` that: (a) redacts sensitive fields like passwords and tokens from log messages, (b) adds the current thread name, and (c) drops log records from a specific noisy logger.
4. Build a `@log_call` decorator that logs function entry (with arguments), exit (with return value and duration), and exceptions (with traceback). Use `functools.wraps` and handle both sync and async functions.
5. Write a performance benchmark comparing: (a) `logger.debug("msg %s", val)` (lazy), (b) `logger.debug(f"msg {val}")` (eager), and (c) a guarded `if logger.isEnabledFor(logging.DEBUG)` around an expensive computation. Measure with `timeit` at both DEBUG and WARNING levels.
6. Create a logging setup that sends ERROR+ logs to an external service (simulate with an HTTP POST to a local endpoint) using a `QueueHandler` + `QueueListener` pattern so logging never blocks the main application thread.
⚠️ Common Mistakes
- Using f-strings in log calls: `logger.info(f'User {user_id} logged in')`. This eagerly formats the string even if the log level is disabled. Use %-style: `logger.info('User %s logged in', user_id)` — formatting is deferred until the message is actually emitted.
- Calling `logging.basicConfig()` in library code. `basicConfig` configures the ROOT logger and should only be called once, in the application's entry point. Libraries should only create loggers with `getLogger(__name__)` and let the application configure handlers.
- Not setting `propagate = False` when adding handlers to child loggers. By default, log records propagate up to parent loggers. If both the child and root logger have handlers, you get duplicate log lines. Set `propagate = False` on loggers that have their own handlers.
- Silencing exceptions with `logger.error(str(e))` instead of `logger.error('Message', exc_info=True)`. The former loses the traceback entirely. `exc_info=True` includes the full stack trace in the log, which is essential for debugging production issues.
- Configuring logging late in startup. Logging configuration should happen as early as possible in the application's entry point: log calls made before any handler exists are effectively lost, and `dictConfig` with `disable_existing_loggers=True` (its default) silences loggers created before it runs.