Code Refactoring Examples - Side by Side Comparison

1. Authentication & Decorators

BEFORE (No Authentication)

@app.route('/logs', methods=['POST'])
@app.route('/log', methods=['POST'])
def log_event():
    try:
        # Get the JSON payload
        data = request.json
        if not data:
            return {"error": "Invalid or missing JSON payload"}, 400
        
        # Extract fields - NO VALIDATION
        hostname = data.get('hostname')
        device_ip = data.get('device_ip')
        # ... anyone can access this!

AFTER (With Authentication & Validation)

@logs_bp.route('', methods=['POST'])
@require_auth                                    # NEW: Requires API key
@log_request                                     # NEW: Logs request
@validate_required_fields(['hostname', 'device_ip'])  # NEW: Validates fields
def log_event():
    try:
        data = request.get_json()
        
        # Validate and sanitize
        hostname = sanitize_hostname(data['hostname'])  # NEW: Format validation
        if not validate_ip_address(data['device_ip']):  # NEW: IP validation
            raise APIError("Invalid IP address", 400)
        
        # Now protected and validated!

Benefits:

  • Only authorized clients can submit logs
  • Input is validated before processing
  • All requests are logged for audit trail
  • Clear error messages
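
The AFTER example relies on a require_auth decorator that is defined elsewhere in the refactored code. Below is a minimal sketch of what it could look like, assuming the key lives in config.API_KEY (section 4) and clients send it in an X-API-Key header; those names, and the use of error_response from section 8, are assumptions rather than confirmed implementation details. log_request and validate_required_fields would follow the same wrapper pattern.

from functools import wraps
from flask import request

def require_auth(f):
    """Reject any request that does not present the expected API key."""
    @wraps(f)
    def wrapper(*args, **kwargs):
        provided_key = request.headers.get('X-API-Key')   # assumed header name
        if not provided_key or provided_key != config.API_KEY:
            return error_response("Unauthorized", 401)
        return f(*args, **kwargs)
    return wrapper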

2. Error Handling

BEFORE (Inconsistent Error Responses)

def log_event():
    try:
        if not data:
            return {"error": "Invalid or missing JSON payload"}, 400  # Format 1
        
        if not hostname:
            return {"error": "Missing required fields"}, 400           # Format 2
        
        # ...
    except sqlite3.Error as e:
        return {"error": f"Database connection failed: {e}"}, 500     # Format 3
    except Exception as e:
        return {"error": "An unexpected error occurred"}, 500         # Format 4

AFTER (Standardized Error Responses)

def log_event():
    try:
        if not data:
            raise APIError("Invalid or missing JSON payload", 400)    # Unified format
        
        if not hostname:
            raise APIError("Missing required fields", 400)            # Same format
        
        # ...
    except APIError as e:
        logger.error(f"API Error: {e.message}")
        return error_response(e.message, e.status_code)               # Consistent!
    except sqlite3.Error as e:
        logger.error(f"Database error: {e}", exc_info=True)
        raise APIError("Database connection failed", 500)             # Always same format
    except Exception as e:
        logger.exception("Unexpected error")
        raise APIError("Internal server error", 500)

@app.errorhandler(APIError)
def handle_api_error(e):
    return error_response(e.message, e.status_code, e.details)

Benefits:

  • All errors follow same format
  • Client can parse responses consistently
  • Errors are logged with full context
  • Easy to add monitoring/alerting
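
Both the route and the error handler above assume an APIError exception class. A minimal sketch consistent with how it is used here (message, status_code, and optional details attributes); the exact implementation is an assumption:

class APIError(Exception):
    """Application error that carries an HTTP status code and optional details."""

    def __init__(self, message, status_code=400, details=None):
        super().__init__(message)
        self.message = message
        self.status_code = status_code
        self.details = details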

3. Logging System

BEFORE (Print Statements)

def log_event():
    try:
        #print(f"Connecting to database at: {DATABASE}")
        
        # Get the JSON payload
        data = request.json
        if not data:
            return {"error": "Invalid or missing JSON payload"}, 400
        
        #print(f"Received request data: {data}")
        
        # ... code ...
        
        print("Log saved successfully")       # Lost in terminal output
        return {"message": "Log saved successfully"}, 201

    except sqlite3.Error as e:
        print(f"Database error: {e}")          # Not structured, hard to parse
        return {"error": f"Database connection failed: {e}"}, 500

    except Exception as e:
        print(f"Unexpected error: {e}")        # No stack trace
        return {"error": "An unexpected error occurred"}, 500

AFTER (Proper Logging)

logger = logging.getLogger(__name__)

def log_event():
    try:
        logger.debug(f"Log event request from {request.remote_addr}")
        
        data = request.get_json()
        if not data:
            logger.warning("Empty JSON payload received")
            raise APIError("Invalid payload", 400)
        
        logger.debug(f"Received request data: {data}")
        
        # ... code ...
        
        logger.info(f"Log saved for {hostname} from {device_ip}")     # Structured!
        return success_response({"log_id": cursor.lastrowid}, "Log saved successfully", 201)

    except sqlite3.Error as e:
        logger.error(f"Database error: {e}", exc_info=True)           # Full traceback
        raise APIError("Database connection failed", 500)

    except Exception as e:
        logger.exception("Unexpected error in log_event")             # Context included
        raise APIError("Internal server error", 500)

Log Output Example:

2025-12-17 10:30:45 - app - DEBUG - Log event request from 192.168.1.100
2025-12-17 10:30:45 - app - DEBUG - Received request data: {...}
2025-12-17 10:30:46 - app - INFO - Log saved for rpi-01 from 192.168.1.101
2025-12-17 10:30:50 - app - ERROR - Database error: unable to connect
Traceback (most recent call last):
  File "server.py", line 42, in log_event
    cursor.execute(...)
  ...

Benefits:

  • Logs go to file with rotation
  • Different severity levels (DEBUG, INFO, WARNING, ERROR)
  • Full stack traces for debugging
  • Timestamps included automatically
  • Can be parsed by log aggregation tools (ELK, Splunk, etc.)
  • Production troubleshooting becomes practical
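
The "file with rotation" benefit comes from standard-library logging configuration. A minimal sketch using RotatingFileHandler; the file path, size limit, and backup count are illustrative assumptions, and the formatter matches the log output shown above:

import logging
import os
from logging.handlers import RotatingFileHandler

def setup_logging(log_level='INFO', log_file='logs/server.log'):
    """Send logs to the console and to a size-rotated file."""
    os.makedirs(os.path.dirname(log_file) or '.', exist_ok=True)
    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )

    # Rotate after ~1 MiB, keeping 5 old files
    file_handler = RotatingFileHandler(log_file, maxBytes=1_000_000, backupCount=5)
    file_handler.setFormatter(formatter)

    console_handler = logging.StreamHandler()
    console_handler.setFormatter(formatter)

    root = logging.getLogger()
    root.setLevel(log_level)
    root.addHandler(file_handler)
    root.addHandler(console_handler)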

4. Configuration Management

BEFORE (Hardcoded Values)

DATABASE = 'data/database.db'           # Hardcoded path
PORT = 80                                # Hardcoded port
REQUEST_TIMEOUT = 30                     # Hardcoded timeout

# Throughout the code:
response = requests.post(url, json=payload, timeout=30)  # Magic number
with sqlite3.connect(DATABASE) as conn:  # Uses global
    # ...
app.run(host='0.0.0.0', port=80)        # Hardcoded

# Problems:
# - Different values needed for dev/test/prod
# - Secret values exposed in code
# - Can't change without code changes

AFTER (Environment-Based Configuration)

# config.py
import os
from dotenv import load_dotenv

load_dotenv()

class Config:
    DATABASE_PATH = os.getenv('DATABASE_PATH', 'data/database.db')
    PORT = int(os.getenv('PORT', 80))
    REQUEST_TIMEOUT = int(os.getenv('REQUEST_TIMEOUT', 30))
    API_KEY = os.getenv('API_KEY', 'change-me')  # From .env
    DEBUG = os.getenv('DEBUG', 'False').lower() == 'true'
    LOG_LEVEL = os.getenv('LOG_LEVEL', 'INFO')

class ProductionConfig(Config):
    DEBUG = False
    LOG_LEVEL = 'INFO'

# server.py
from config import get_config
config = get_config()

response = requests.post(url, json=payload, timeout=config.REQUEST_TIMEOUT)
with sqlite3.connect(config.DATABASE_PATH) as conn:
    # ...
app.run(host='0.0.0.0', port=config.PORT)

# .env (local)
DATABASE_PATH=/var/lib/server_mon/database.db
PORT=8000
DEBUG=True
API_KEY=my-secure-key

# Benefits:
# - Same code, different configs
# - Secrets not in version control
# - Easy deployment to prod

Benefits:

  • Environment-specific configuration
  • Secrets in .env (not committed to git)
  • Easy deployment
  • No code changes needed per environment
  • Supports dev/test/prod differences
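
The server.py snippet above imports get_config from config.py without showing it. A minimal sketch of one way to implement it, selecting a configuration class based on an environment variable; the FLASK_ENV variable name and the DevelopmentConfig class are assumptions:

# config.py (continued)
class DevelopmentConfig(Config):
    DEBUG = True
    LOG_LEVEL = 'DEBUG'

def get_config():
    """Return the configuration class matching the FLASK_ENV environment variable."""
    env = os.getenv('FLASK_ENV', 'production').lower()
    if env == 'development':
        return DevelopmentConfig
    return ProductionConfig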

5. Input Validation

BEFORE (Minimal Validation)

def log_event():
    # Get the JSON payload
    data = request.json
    if not data:
        return {"error": "Invalid or missing JSON payload"}, 400
    
    # Extract fields from the JSON payload
    hostname = data.get('hostname')
    device_ip = data.get('device_ip')
    nume_masa = data.get('nume_masa')
    log_message = data.get('log_message')
    timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')

    # Validate required fields
    if not hostname or not device_ip or not nume_masa or not log_message:
        print("Validation failed: Missing required fields")
        return {"error": "Missing required fields"}, 400
    
    # NO FORMAT VALIDATION
    # - hostname could be very long
    # - device_ip could be invalid format
    # - log_message could contain injection payloads
    # - No type checking

AFTER (Comprehensive Validation)

from marshmallow import Schema, fields, validate, ValidationError

class LogSchema(Schema):
    """Define expected schema and validation rules"""
    hostname = fields.Str(
        required=True,
        validate=[
            validate.Length(min=1, max=255),
            validate.Regexp(r'^[a-zA-Z0-9_-]+$', error="Invalid characters")
        ]
    )
    device_ip = fields.IP(required=True)  # Validates IP format
    nume_masa = fields.Str(
        required=True,
        validate=validate.Length(min=1, max=255)
    )
    log_message = fields.Str(
        required=True,
        validate=validate.Length(min=1, max=1000)
    )

schema = LogSchema()

def log_event():
    try:
        data = schema.load(request.json)  # Auto-validates all fields
        hostname = data['hostname']        # Already validated
        device_ip = data['device_ip']     # Already validated
        
        # Data is guaranteed to be valid format
        
    except ValidationError as err:
        logger.warning(f"Validation failed: {err.messages}")
        return error_response("Validation failed", 400, err.messages)

Validation Errors (Clear Feedback):

{
    "errors": {
        "hostname": ["Length must be between 1 and 255"],
        "device_ip": ["Not a valid IP address"],
        "log_message": ["Length must be between 1 and 1000"]
    }
}

Benefits:

  • Clear validation rules (declarative)
  • Reusable schemas
  • Type checking
  • Length limits
  • Format validation (IP, email, etc.)
  • Custom validators
  • Detailed error messages for client
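
Section 1 used standalone helpers (validate_ip_address, sanitize_hostname) rather than a schema. A minimal standard-library sketch of what those could look like; the exact rules are assumptions chosen to mirror the schema above:

import ipaddress
import re

def validate_ip_address(value):
    """Return True if value parses as an IPv4 or IPv6 address."""
    try:
        ipaddress.ip_address(value)
        return True
    except ValueError:
        return False

def sanitize_hostname(value):
    """Trim the hostname, cap its length, and restrict it to safe characters."""
    value = str(value).strip()[:255]
    if not re.fullmatch(r'[a-zA-Z0-9_-]+', value):
        raise APIError("Invalid hostname format", 400)
    return value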

6. Database Queries

BEFORE (No Pagination)

@app.route('/dashboard', methods=['GET'])
def dashboard():
    with sqlite3.connect(DATABASE) as conn:
        cursor = conn.cursor()
        # Always fetches the latest 60 logs - no way to page further back
        cursor.execute('''
            SELECT hostname, device_ip, nume_masa, timestamp, event_description 
            FROM logs 
            WHERE hostname != 'SERVER' 
            ORDER BY timestamp DESC 
            LIMIT 60
        ''')
        logs = cursor.fetchall()
    return render_template('dashboard.html', logs=logs)

# Problem: as the table grows to 100k rows
# - Only the latest 60 logs are ever reachable (no paging)
# - The ORDER BY scan over the growing table gets slower
# - The limit is hard-coded and cannot be tuned per request

AFTER (With Pagination)

@logs_bp.route('/dashboard', methods=['GET'])
def dashboard():
    page = request.args.get('page', 1, type=int)
    per_page = min(
        request.args.get('per_page', config.DEFAULT_PAGE_SIZE, type=int),
        config.MAX_PAGE_SIZE
    )
    
    conn = get_db_connection(config.DATABASE_PATH)
    try:
        cursor = conn.cursor()
        
        # Get total count
        cursor.execute("SELECT COUNT(*) FROM logs WHERE hostname != 'SERVER'")
        total = cursor.fetchone()[0]
        
        # Get only requested page
        offset = (page - 1) * per_page
        cursor.execute('''
            SELECT hostname, device_ip, nume_masa, timestamp, event_description 
            FROM logs 
            WHERE hostname != 'SERVER' 
            ORDER BY timestamp DESC 
            LIMIT ? OFFSET ?
        ''', (per_page, offset))
        
        logs = cursor.fetchall()
        total_pages = (total + per_page - 1) // per_page
        
        return render_template(
            'dashboard.html',
            logs=logs,
            page=page,
            total_pages=total_pages,
            total=total
        )
    finally:
        conn.close()

# Usage: /dashboard?page=1&per_page=20
# Benefits:
# - Only fetches 20 rows
# - Memory usage constant regardless of table size
# - Can navigate pages easily

Benefits:

  • Constant memory usage
  • Faster page loads
  • Can handle large datasets
  • Better UX with page navigation
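
The paginated route calls a get_db_connection helper that is not shown. A minimal sketch; returning sqlite3.Row rows is an assumption:

import sqlite3

def get_db_connection(database_path):
    """Open a SQLite connection whose rows can be read by column name."""
    conn = sqlite3.connect(database_path)
    conn.row_factory = sqlite3.Row
    return conn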

7. Threading & Concurrency

BEFORE (Unbounded Threads)

@app.route('/execute_command_bulk', methods=['POST'])
def execute_command_bulk():
    try:
        data = request.json
        device_ips = data.get('device_ips', [])
        command = data.get('command')
        
        results = {}
        threads = []
        
        def execute_on_device(ip):
            results[ip] = execute_command_on_device(ip, command)
        
        # Execute commands in parallel
        for ip in device_ips:  # No limit!
            thread = threading.Thread(target=execute_on_device, args=(ip,))
            threads.append(thread)
            thread.start()  # Creates a thread for EACH IP
        
        # Wait for all threads to complete
        for thread in threads:
            thread.join()
        
        # Problem: If user sends 1000 devices, creates 1000 threads!
        # - Exhausts system memory
        # - System becomes unresponsive
        # - No control over resource usage

AFTER (ThreadPoolExecutor with Limits)

from concurrent.futures import ThreadPoolExecutor, as_completed

@app.route('/execute_command_bulk', methods=['POST'])
def execute_command_bulk():
    try:
        data = request.json
        device_ips = data.get('device_ips', [])
        command = data.get('command')
        
        # Limit threads
        max_workers = min(
            config.BULK_OPERATION_MAX_THREADS,  # e.g., 10
            len(device_ips)
        )
        
        results = {}
        
        # Use ThreadPoolExecutor with bounded pool
        with ThreadPoolExecutor(max_workers=max_workers) as executor:
            # Submit all jobs
            future_to_ip = {
                executor.submit(execute_command_on_device, ip, command): ip 
                for ip in device_ips
            }
            
            # Process results as they complete
            for future in as_completed(future_to_ip):
                ip = future_to_ip[future]
                try:
                    results[ip] = future.result()
                except Exception as e:
                    logger.error(f"Error executing command on {ip}: {e}")
                    results[ip] = {"success": False, "error": str(e)}
        
        return jsonify({"results": results}), 200

# Usage: Same API, but:
# - Max 10 threads running at once
# - Can handle 1000 devices gracefully
# - Memory usage is bounded
# - System stays responsive

Benefits:

  • Bounded thread pool (max 10)
  • No resource exhaustion
  • Graceful handling of large requests
  • Can process results as they complete
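
For reference, a hedged client-side sketch of calling the bulk endpoint; the server URL, the X-API-Key header, and the key value are illustrative assumptions:

import requests

payload = {
    "device_ips": ["192.168.1.101", "192.168.1.102"],
    "command": "uptime",
}
response = requests.post(
    "http://server.local/execute_command_bulk",   # illustrative URL
    json=payload,
    headers={"X-API-Key": "my-secure-key"},       # assumed auth header
    timeout=30,
)
for ip, result in response.json()["results"].items():
    print(ip, result)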

8. Response Formatting

BEFORE (Inconsistent Responses)

# Different response formats throughout
return {"message": "Log saved successfully"}, 201
return {"error": "Invalid or missing JSON payload"}, 400
return {"success": True, "status": result_data.get('status')}, 200
return {"error": error_msg}, 400
return jsonify({"results": results}), 200

# Client has to handle multiple formats
# Hard to parse responses consistently
# Hard to add metadata (timestamps, etc.)

AFTER (Standardized Responses)

# utils.py
from datetime import datetime
from flask import jsonify

def error_response(message, status_code=400, details=None):
    response = {
        'success': False,
        'error': message,
        'timestamp': datetime.now().isoformat()
    }
    if details:
        response['details'] = details
    return jsonify(response), status_code

def success_response(data=None, message="Success", status_code=200):
    response = {
        'success': True,
        'message': message,
        'timestamp': datetime.now().isoformat()
    }
    if data is not None:  # keep falsy payloads such as {} or 0
        response['data'] = data
    return jsonify(response), status_code

# Usage in routes
return success_response({"log_id": 123}, "Log saved successfully", 201)
return error_response("Invalid payload", 400, {"fields": ["hostname"]})
return success_response(results, message="Command executed")

# Consistent responses:
{
    "success": true,
    "message": "Log saved successfully",
    "timestamp": "2025-12-17T10:30:46.123456",
    "data": {
        "log_id": 123
    }
}

{
    "success": false,
    "error": "Invalid payload",
    "timestamp": "2025-12-17T10:30:46.123456",
    "details": {
        "fields": ["hostname"]
    }
}

Benefits:

  • All responses have same format
  • Client code is simpler
  • Easier to add logging/monitoring
  • Includes timestamp for debugging
  • Structured error details
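
Because every response now carries a success flag, a client only needs one small helper to unwrap any endpoint's reply; a hedged sketch:

def unwrap(response):
    """Return the data payload on success, or raise with the standardized error."""
    body = response.json()
    if not body.get("success"):
        raise RuntimeError(f"{body.get('error')}: {body.get('details')}")
    return body.get("data")

# Usage with the requests library:
#   data = unwrap(requests.post(url, json=payload, headers=headers, timeout=30))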

Summary: Key Improvements at a Glance

Aspect            Before                 After
Security          No auth                API key auth
Logging           print()                Proper logging with levels
Errors            Inconsistent formats   Standardized responses
Validation        Basic checks           Comprehensive validation
Config            Hardcoded values       Environment-based
Database          No pagination          Paginated queries
Threading         Unbounded              Bounded pool (max 10)
Code Structure    462 lines in 1 file    Modular with blueprints
Testing           No tests               pytest ready
Observability     None                   Health checks, stats, logs
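
The Observability row mentions health checks; a minimal sketch of such an endpoint, where the /health route name and the checks performed are assumptions (it reuses the sqlite3, config, jsonify, and datetime imports shown in earlier sections):

@app.route('/health', methods=['GET'])
def health():
    """Report process liveness and database reachability."""
    db_ok = True
    try:
        conn = sqlite3.connect(config.DATABASE_PATH)
        conn.execute('SELECT 1')
        conn.close()
    except sqlite3.Error:
        db_ok = False

    status_code = 200 if db_ok else 503
    return jsonify({
        "status": "ok" if db_ok else "degraded",
        "database": db_ok,
        "timestamp": datetime.now().isoformat()
    }), status_code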

Created: December 17, 2025