moved the old code folder
Binary file not shown.
@@ -1,152 +0,0 @@

# CSS Modular Structure Guide

## Overview

This guide explains how to migrate from a monolithic CSS file to a modular CSS structure for better maintainability and organization.

## New CSS Structure

```
app/static/css/
├── base.css         # Global styles, header, buttons, theme
├── login.css        # Login page specific styles
├── dashboard.css    # Dashboard and module cards
├── warehouse.css    # Warehouse module styles
├── etichete.css     # Labels (etichete) module styles (to be created)
├── quality.css      # Quality module styles (to be created)
└── scan.css         # Scan module styles (to be created)
```

## Implementation Strategy

### Phase 1: Setup Modular Structure ✅
- [x] Created `css/` directory
- [x] Created `base.css` with global styles
- [x] Created `login.css` for login page
- [x] Created `warehouse.css` for warehouse module
- [x] Updated `base.html` to include modular CSS
- [x] Updated `login.html` to use new structure

### Phase 2: Migration Plan (Next Steps)

1. **Extract module-specific styles from style.css:**
   - Labels (etichete) module → `etichete.css`
   - Quality module → `quality.css`
   - Scan module → `scan.css`

2. **Update templates to use modular CSS:**
   ```html
   {% block head %}
   <link rel="stylesheet" href="{{ url_for('static', filename='css/module-name.css') }}">
   {% endblock %}
   ```

3. **Clean up original style.css:**
   - Remove extracted styles
   - Keep only legacy/common styles temporarily
   - Eventually eliminate it once all modules are migrated

## Template Usage Pattern

### Standard Template Structure:
```html
{% extends "base.html" %}
{% block title %}Page Title{% endblock %}

{% block head %}
<!-- Include module-specific CSS -->
<link rel="stylesheet" href="{{ url_for('static', filename='css/module-name.css') }}">
<!-- Page-specific overrides -->
<style>
    /* Only use this for page-specific customizations */
</style>
{% endblock %}

{% block content %}
<!-- Page content -->
{% endblock %}
```

## CSS Loading Order

1. `base.css` - Global styles, header, buttons, theme
2. `style.css` - Legacy styles (temporary, for backward compatibility)
3. Module-specific CSS (e.g., `warehouse.css`)
4. Inline `<style>` blocks for page-specific overrides
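
As a concrete illustration of this loading order, the `<head>` of `base.html` could look roughly like the sketch below. The exact block name and the `style.css` path are assumptions based on this guide, not a copy of the project's actual template:

```html
<head>
    <!-- 1. Global styles shared by every page -->
    <link rel="stylesheet" href="{{ url_for('static', filename='css/base.css') }}">
    <!-- 2. Legacy monolithic stylesheet, kept only until migration is complete -->
    <link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
    <!-- 3. + 4. Module CSS and page-specific overrides injected by each template -->
    {% block head %}{% endblock %}
</head>
```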

## Benefits of This Structure

### 1. **Maintainability**
- Easy to find and edit module-specific styles
- Reduced conflicts between different modules
- Clear separation of concerns

### 2. **Performance**
- Only load the CSS needed for a specific page
- Smaller file sizes per page
- Better caching (module CSS rarely changes)

### 3. **Team Development**
- Different developers can work on different modules
- Fewer merge conflicts in CSS files
- Clear ownership of styles

### 4. **Scalability**
- Easy to add new modules
- Simple to deprecate old styles
- Clear migration path

## Migration Checklist

### For Each Template:
- [ ] Identify module/page type
- [ ] Extract relevant styles to module CSS file
- [ ] Update template to include module CSS
- [ ] Test that styling works correctly
- [ ] Remove old styles from style.css

### Current Status:
- [x] Login page - Fully migrated
- [x] Warehouse module - Partially migrated (create_locations.html updated)
- [ ] Dashboard - CSS created, templates need updating
- [ ] Labels (etichete) module - Needs CSS extraction
- [ ] Quality module - Needs CSS extraction
- [ ] Scan module - Needs CSS extraction

## Example: Migrating a Template

### Before:
```html
{% block head %}
<style>
    .my-module-specific-class {
        /* styles here */
    }
</style>
{% endblock %}
```

### After:
1. Move styles to `css/module-name.css`
2. Update template:
   ```html
   {% block head %}
   <link rel="stylesheet" href="{{ url_for('static', filename='css/module-name.css') }}">
   {% endblock %}
   ```

## Best Practices

1. **Use semantic naming:** `warehouse.css`, `login.css`, not `page1.css`
2. **Keep base.css minimal:** Only truly global styles
3. **Avoid deep nesting:** Keep CSS selectors simple
4. **Use consistent naming:** Follow existing patterns
5. **Document changes:** Update this guide when adding new modules

## Next Steps

1. Extract labels (etichete) module styles to `etichete.css`
2. Update all etichete templates to use the new CSS
3. Extract quality module styles to `quality.css`
4. Extract scan module styles to `scan.css`
5. Gradually remove migrated styles from `style.css`
6. Eventually remove the `style.css` dependency from `base.html`

@@ -1,133 +0,0 @@

# Quick Database Setup for Trasabilitate Application

This script provides a complete one-step database setup for quick deployment of the Trasabilitate application.

## Prerequisites

Before running the setup script, ensure:

1. **MariaDB is installed and running**
2. **Database and user are created**:
   ```sql
   CREATE DATABASE trasabilitate;
   CREATE USER 'trasabilitate'@'localhost' IDENTIFIED BY 'Initial01!';
   GRANT ALL PRIVILEGES ON trasabilitate.* TO 'trasabilitate'@'localhost';
   FLUSH PRIVILEGES;
   ```
3. **Python virtual environment is activated**:
   ```bash
   source ../recticel/bin/activate
   ```
4. **Python dependencies are installed**:
   ```bash
   pip install -r requirements.txt
   ```

## Usage

### Quick Setup (Recommended)
```bash
cd /srv/quality_recticel/py_app
source ../recticel/bin/activate
python3 app/db_create_scripts/setup_complete_database.py
```

### What the script creates:

#### MariaDB Tables:
- `scan1_orders` - Quality scanning data for process 1
- `scanfg_orders` - Quality scanning data for finished goods
- `order_for_labels` - Label printing orders
- `warehouse_locations` - Warehouse location management
- `permissions` - System permissions
- `role_permissions` - Role-permission mappings
- `role_hierarchy` - User role hierarchy
- `permission_audit_log` - Permission change audit trail

#### Database Triggers:
- Auto-increment approved/rejected quantities based on quality codes
- Triggers for both scan1_orders and scanfg_orders tables
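
The real trigger definitions live in `create_triggers.py` / `create_triggers_fg.py`; the sketch below only illustrates the idea of counting approved versus rejected quantities from the quality code. The column names (`quality_code`, `approved_qty`, `rejected_qty`) are assumptions for illustration, not the actual schema:

```sql
-- Hypothetical illustration only; real triggers are created by create_triggers.py
DELIMITER $$
CREATE TRIGGER IF NOT EXISTS scan1_orders_quality_count
BEFORE INSERT ON scan1_orders
FOR EACH ROW
BEGIN
    IF NEW.quality_code = 0 THEN
        -- quality code 0 means "OK" elsewhere in this project
        SET NEW.approved_qty = COALESCE(NEW.approved_qty, 0) + 1;
    ELSE
        SET NEW.rejected_qty = COALESCE(NEW.rejected_qty, 0) + 1;
    END IF;
END$$
DELIMITER ;
```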

#### SQLite Tables:
- `users` - User authentication (in instance/users.db)
- `roles` - User roles (in instance/users.db)

#### Configuration:
- Updates `instance/external_server.conf` with the correct database settings
- Creates a default superadmin user (username: `superadmin`, password: `superadmin123`)

#### Permission System:
- 7 user roles (superadmin, admin, manager, quality_manager, warehouse_manager, quality_worker, warehouse_worker)
- 25+ granular permissions for different application areas
- Complete role hierarchy with inheritance

## After Setup

1. **Start the application**:
   ```bash
   python3 run.py
   ```

2. **Access the application**:
   - Local: http://127.0.0.1:8781
   - Network: http://192.168.0.205:8781

3. **Login with superadmin**:
   - Username: `superadmin`
   - Password: `superadmin123`

## Troubleshooting

### Common Issues:

1. **Database connection failed**:
   - Check if MariaDB is running: `sudo systemctl status mariadb`
   - Verify the database exists: `sudo mysql -e "SHOW DATABASES;"`
   - Check user privileges: `sudo mysql -e "SHOW GRANTS FOR 'trasabilitate'@'localhost';"`

2. **Import errors**:
   - Ensure the virtual environment is activated
   - Install missing dependencies: `pip install -r requirements.txt`

3. **Permission denied**:
   - Make the script executable: `chmod +x app/db_create_scripts/setup_complete_database.py`
   - Check file ownership: `ls -la app/db_create_scripts/`

### Manual Database Recreation:

If you need to completely reset the database:

```bash
# Drop and recreate database
sudo mysql -e "DROP DATABASE IF EXISTS trasabilitate; CREATE DATABASE trasabilitate; GRANT ALL PRIVILEGES ON trasabilitate.* TO 'trasabilitate'@'localhost'; FLUSH PRIVILEGES;"

# Remove SQLite database
rm -f instance/users.db

# Run setup script
python3 app/db_create_scripts/setup_complete_database.py
```

## Script Features

- ✅ **Comprehensive**: Creates all necessary database structure
- ✅ **Safe**: Uses `IF NOT EXISTS` clauses to prevent conflicts
- ✅ **Verified**: Includes a verification step to confirm the setup
- ✅ **Informative**: Detailed output showing each step
- ✅ **Error handling**: Clear error messages and troubleshooting hints
- ✅ **Idempotent**: Can be run multiple times safely

## Development Notes

The script combines functionality from these individual scripts:
- `create_scan_1db.py`
- `create_scanfg_orders.py`
- `create_order_for_labels_table.py`
- `create_warehouse_locations_table.py`
- `create_permissions_tables.py`
- `create_roles_table.py`
- `create_triggers.py`
- `create_triggers_fg.py`
- `populate_permissions.py`

For development or debugging, you can still run the individual scripts if needed.

@@ -1,319 +0,0 @@

# Recticel Quality Application - Docker Deployment Guide

## 📋 Overview

This is a complete Docker-based deployment solution for the Recticel Quality Application. It includes:
- **Flask Web Application** (Python 3.10)
- **MariaDB 11.3 Database** with automatic initialization
- **Gunicorn WSGI Server** for production-ready performance
- **Automatic database schema setup** using existing setup scripts
- **Superadmin user seeding** for immediate access

## 🚀 Quick Start

### Prerequisites
- Docker Engine 20.10+
- Docker Compose 2.0+
- At least 2GB free disk space
- Ports 8781 and 3306 available (or customize in .env)

### 1. Clone and Prepare

```bash
cd /srv/quality_recticel
```

### 2. Configure Environment (Optional)

Create a `.env` file from the example:

```bash
cp .env.example .env
```

Edit `.env` to customize settings:
```env
MYSQL_ROOT_PASSWORD=your_secure_root_password
DB_PORT=3306
APP_PORT=8781
INIT_DB=true
SEED_DB=true
```

### 3. Build and Deploy

Start all services:

```bash
docker-compose up -d --build
```

This will:
1. ✅ Build the Flask application Docker image
2. ✅ Pull the MariaDB 11.3 image
3. ✅ Create and initialize the database
4. ✅ Run all database schema creation scripts
5. ✅ Seed the superadmin user
6. ✅ Start the web application on port 8781

### 4. Verify Deployment

Check service status:
```bash
docker-compose ps
```

View logs:
```bash
# All services
docker-compose logs -f

# Just the web app
docker-compose logs -f web

# Just the database
docker-compose logs -f db
```

### 5. Access the Application

Open your browser and navigate to:
```
http://localhost:8781
```

**Default Login:**
- Username: `superadmin`
- Password: `superadmin123`

## 🔧 Management Commands

### Start Services
```bash
docker-compose up -d
```

### Stop Services
```bash
docker-compose down
```

### Stop and Remove All Data (including database)
```bash
docker-compose down -v
```

### Restart Services
```bash
docker-compose restart
```

### View Real-time Logs
```bash
docker-compose logs -f
```

### Rebuild After Code Changes
```bash
docker-compose up -d --build
```

### Access Database Console
```bash
docker-compose exec db mariadb -u trasabilitate -p trasabilitate
# Password: Initial01!
```

### Execute Commands in App Container
```bash
docker-compose exec web bash
```

## 📁 Data Persistence

The following data is persisted across container restarts:

- **Database Data:** Stored in Docker volume `mariadb_data`
- **Application Logs:** Mapped to `./logs` directory
- **Instance Config:** Mapped to `./instance` directory
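
In compose terms, the persistence described above corresponds roughly to the volume entries sketched below. The container-side paths (`/var/lib/mysql`, `/app/logs`, `/app/instance`) are assumptions based on this guide, not a copy of the project's docker-compose.yml:

```yaml
# Sketch of the persistence-related parts of docker-compose.yml
services:
  db:
    volumes:
      - mariadb_data:/var/lib/mysql   # database files in a named volume
  web:
    volumes:
      - ./logs:/app/logs              # application/Gunicorn logs on the host
      - ./instance:/app/instance      # runtime configuration on the host

volumes:
  mariadb_data:
```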

## 🔐 Security Considerations

### Production Deployment Checklist:

1. **Change Default Passwords:**
   - Update `MYSQL_ROOT_PASSWORD` in `.env`
   - Update the database password in `docker-compose.yml`
   - Change the superadmin password after first login

2. **Use Environment Variables:**
   - Never commit the `.env` file to version control
   - Use secrets management for production

3. **Network Security:**
   - If database access from the host is not needed, remove the port mapping:
     ```yaml
     # Comment out in docker-compose.yml:
     # ports:
     #   - "3306:3306"
     ```

4. **SSL/TLS:**
   - Configure a reverse proxy (nginx/traefik) for HTTPS (a minimal nginx sketch follows this checklist)
   - Update the gunicorn SSL configuration if needed

5. **Firewall:**
   - Only expose necessary ports
   - Use firewall rules to restrict access
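
Expanding on point 4, a minimal nginx server block for the HTTPS reverse proxy could look like the sketch below. The hostname and certificate paths are placeholders, and the upstream port assumes the default `APP_PORT=8781`:

```nginx
server {
    listen 443 ssl;
    server_name quality.example.com;                 # placeholder hostname

    ssl_certificate     /etc/ssl/certs/quality.crt;  # placeholder paths
    ssl_certificate_key /etc/ssl/private/quality.key;

    location / {
        proxy_pass http://127.0.0.1:8781;            # default APP_PORT
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```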

## 🐛 Troubleshooting

### Database Connection Issues

If the app can't connect to the database:

```bash
# Check database health
docker-compose exec db healthcheck.sh --connect

# Check database logs
docker-compose logs db

# Verify database is accessible
docker-compose exec db mariadb -u trasabilitate -p -e "SHOW DATABASES;"
```

### Application Not Starting

```bash
# Check application logs
docker-compose logs web

# Verify database initialization
docker-compose exec web python3 -c "import mariadb; print('MariaDB module OK')"

# Restart with fresh initialization
docker-compose down
docker-compose up -d
```

### Port Already in Use

If port 8781 or 3306 is already in use, edit `.env`:

```env
APP_PORT=8782
DB_PORT=3307
```

Then restart:
```bash
docker-compose down
docker-compose up -d
```

### Reset Everything

To start completely fresh:

```bash
# Stop and remove all containers, networks, and volumes
docker-compose down -v

# Remove any local data
rm -rf logs/* instance/external_server.conf

# Start fresh
docker-compose up -d --build
```

## 🔄 Updating the Application

### Update Application Code

1. Make your code changes
2. Rebuild and restart:
   ```bash
   docker-compose up -d --build web
   ```

### Update Database Schema

If you need to run migrations or schema updates:

```bash
docker-compose exec web python3 /app/app/db_create_scripts/setup_complete_database.py
```

## 📊 Monitoring

### Health Checks

Both services have health checks configured:

```bash
# Check overall status
docker-compose ps

# Detailed health status
docker inspect recticel-app | grep -A 10 Health
docker inspect recticel-db | grep -A 10 Health
```

### Resource Usage

```bash
# View resource consumption
docker stats recticel-app recticel-db
```

## 🏗️ Architecture

```
┌─────────────────────────────────────┐
│      Docker Compose Network         │
│                                     │
│  ┌──────────────┐  ┌─────────────┐  │
│  │   MariaDB    │  │  Flask App  │  │
│  │  Container   │◄─┤  Container  │  │
│  │              │  │             │  │
│  │  Port: 3306  │  │ Port: 8781  │  │
│  └──────┬───────┘  └──────┬──────┘  │
│         │                 │         │
└─────────┼─────────────────┼─────────┘
          │                 │
          ▼                 ▼
    [Volume:           [Logs &
    mariadb_data]      Instance]
```

## 📝 Environment Variables

### Database Configuration
- `MYSQL_ROOT_PASSWORD`: MariaDB root password
- `DB_HOST`: Database hostname (default: `db`)
- `DB_PORT`: Database port (default: `3306`)
- `DB_NAME`: Database name (default: `trasabilitate`)
- `DB_USER`: Database user (default: `trasabilitate`)
- `DB_PASSWORD`: Database password (default: `Initial01!`)

### Application Configuration
- `FLASK_ENV`: Flask environment (default: `production`)
- `FLASK_APP`: Flask app entry point (default: `run.py`)
- `APP_PORT`: Application port (default: `8781`)

### Initialization Flags
- `INIT_DB`: Run database initialization (default: `true`)
- `SEED_DB`: Seed superadmin user (default: `true`)

## 🆘 Support

For issues or questions:
1. Check the logs: `docker-compose logs -f`
2. Verify environment configuration
3. Ensure all prerequisites are met
4. Review this documentation

## 📄 License

[Your License Here]

@@ -1,303 +0,0 @@

# Quality Application - Docker Deployment Guide

## 📋 Overview

This application is containerized with Docker and docker-compose, providing:
- **MariaDB 11.3** database with persistent storage
- **Flask** web application with Gunicorn
- **Mapped volumes** for easy access to code, data, and backups

## 🗂️ Volume Structure

```
quality_app/
├── data/
│   └── mariadb/     # Database files (MariaDB data directory)
├── config/
│   └── instance/    # Application configuration (external_server.conf)
├── logs/            # Application and Gunicorn logs
├── backups/         # Database backup files (shared with DB container)
└── py_app/          # Application source code (optional mapping)
```

## 🚀 Quick Start

### 1. Setup Volumes

```bash
# Create necessary directories
bash setup-volumes.sh
```

### 2. Configure Environment

```bash
# Create .env file from example
cp .env.example .env

# Edit configuration (IMPORTANT: Change passwords!)
nano .env
```

**Critical settings to change:**
- `MYSQL_ROOT_PASSWORD` - Database root password
- `DB_PASSWORD` - Application database password
- `SECRET_KEY` - Flask secret key (generate a random string)

**First deployment settings:**
- `INIT_DB=true` - Initialize database schema
- `SEED_DB=true` - Seed with default data

**After first deployment:**
- `INIT_DB=false`
- `SEED_DB=false`

### 3. Deploy Application

**Option A: Automated deployment**
```bash
bash quick-deploy.sh
```

**Option B: Manual deployment**
```bash
# Build images
docker-compose build

# Start services
docker-compose up -d

# View logs
docker-compose logs -f
```

## 📦 Application Dependencies

### Python Packages (from requirements.txt):
- Flask - Web framework
- Flask-SSLify - SSL support
- Werkzeug - WSGI utilities
- gunicorn - Production WSGI server
- pyodbc - ODBC database connectivity
- mariadb - MariaDB connector
- reportlab - PDF generation
- requests - HTTP library
- pandas - Data manipulation
- openpyxl - Excel file support
- APScheduler - Job scheduling for automated backups

### System Dependencies (handled in Dockerfile):
- Python 3.10
- MariaDB client libraries
- curl (for health checks)

## 🐳 Docker Images

### Web Application
- **Base**: python:3.10-slim
- **Multi-stage build** for minimal image size
- **Non-root user** for security
- **Health checks** enabled

### Database
- **Image**: mariadb:11.3
- **Persistent storage** with volume mapping
- **Performance tuning** via environment variables

## 📊 Resource Limits

### Database Container
- CPU: 2.0 cores (limit), 0.5 cores (reserved)
- Memory: 2GB (limit), 512MB (reserved)
- Buffer pool: 512MB

### Web Container
- CPU: 2.0 cores (limit), 0.5 cores (reserved)
- Memory: 2GB (limit), 512MB (reserved)
- Workers: 5 Gunicorn workers
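
In docker-compose terms, these limits correspond roughly to a `deploy.resources` section like the sketch below (values copied from the figures above; the exact keys and placement in the project's docker-compose.yml may differ):

```yaml
# Sketch: resource limits as they would appear for the web service
services:
  web:
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 2G
        reservations:
          cpus: "0.5"
          memory: 512M
```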

## 🔧 Common Operations

### View Logs
```bash
# Application logs
docker-compose logs -f web

# Database logs
docker-compose logs -f db

# All logs
docker-compose logs -f
```

### Restart Services
```bash
# Restart all
docker-compose restart

# Restart specific service
docker-compose restart web
docker-compose restart db
```

### Stop Services
```bash
# Stop (keeps data)
docker-compose down

# Stop and remove volumes (WARNING: deletes database!)
docker-compose down -v
```

### Update Application Code

**Without rebuilding (development mode):**
1. Uncomment the volume mapping in docker-compose.yml:
   ```yaml
   - ${APP_CODE_PATH}:/app:ro
   ```
2. Edit code in `./py_app/`
3. Restart: `docker-compose restart web`

**With rebuilding (production mode):**
```bash
docker-compose build --no-cache web
docker-compose up -d
```

### Database Access

**MySQL shell inside container:**
```bash
docker-compose exec db mysql -u trasabilitate -p
# Enter password: Initial01! (or your custom password)
```

**From host machine:**
```bash
mysql -h 127.0.0.1 -P 3306 -u trasabilitate -p
```

**Root access:**
```bash
docker-compose exec db mysql -u root -p
```

## 💾 Backup Operations

### Manual Backup
```bash
# Full backup
docker-compose exec db mysqldump -u trasabilitate -pInitial01! trasabilitate > backups/manual_$(date +%Y%m%d_%H%M%S).sql

# Data-only backup
docker-compose exec db mysqldump -u trasabilitate -pInitial01! --no-create-info trasabilitate > backups/data_only_$(date +%Y%m%d_%H%M%S).sql

# Structure-only backup
docker-compose exec db mysqldump -u trasabilitate -pInitial01! --no-data trasabilitate > backups/structure_only_$(date +%Y%m%d_%H%M%S).sql
```

### Automated Backups
The application includes a built-in scheduler for automated backups. Configure it via the web interface.
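
The scheduler itself lives inside the application (see `database_backup.py` and the APScheduler dependency). Purely as an illustration of how APScheduler drives such a job, a minimal sketch might look like this; the `run_backup` function here is hypothetical and stands in for the app's real backup manager:

```python
# Minimal illustration of scheduling a nightly backup job with APScheduler.
# run_backup() is a placeholder for the application's own backup logic.
from apscheduler.schedulers.background import BackgroundScheduler

def run_backup():
    # In the real app this delegates to the backup manager in database_backup.py
    print("running database dump ...")

scheduler = BackgroundScheduler()
scheduler.add_job(run_backup, trigger="cron", hour=2, minute=0)  # every night at 02:00
scheduler.start()
```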

### Restore from Backup
```bash
# Stop application (keeps database running)
docker-compose stop web

# Restore database
docker-compose exec -T db mysql -u trasabilitate -pInitial01! trasabilitate < backups/backup_file.sql

# Start application
docker-compose start web
```

## 🔍 Troubleshooting

### Container won't start
```bash
# Check logs
docker-compose logs db
docker-compose logs web

# Check if ports are available
ss -tulpn | grep 8781
ss -tulpn | grep 3306
```

### Database connection failed
```bash
# Check database is healthy
docker-compose ps

# Test database connection
docker-compose exec db mysqladmin ping -u root -p

# Check database users
docker-compose exec db mysql -u root -p -e "SELECT User, Host FROM mysql.user;"
```

### Permission issues
```bash
# Check directory permissions
ls -la data/mariadb
ls -la logs
ls -la backups

# Fix permissions if needed
chmod -R 755 data logs backups config
```

### Reset everything (WARNING: deletes all data!)
```bash
# Stop and remove containers, volumes
docker-compose down -v

# Remove volume directories
rm -rf data/mariadb/* logs/* config/instance/*

# Start fresh
bash quick-deploy.sh
```

## 🔒 Security Notes

1. **Change default passwords** in the .env file
2. **Generate a new SECRET_KEY** for Flask
3. Never commit the .env file to version control
4. Use firewall rules to restrict database port (3306) access
5. Consider using Docker secrets for sensitive data in production
6. Regular security updates: `docker-compose pull && docker-compose up -d`

## 🌐 Port Mapping

- **8781** - Web application (configurable via APP_PORT in .env)
- **3306** - MariaDB database (configurable via DB_PORT in .env)

## 📁 Configuration Files

- **docker-compose.yml** - Service orchestration
- **.env** - Environment variables and configuration
- **Dockerfile** - Web application image definition
- **docker-entrypoint.sh** - Container initialization script
- **init-db.sql** - Database initialization script

## 🎯 Production Checklist

- [ ] Change all default passwords
- [ ] Generate a secure SECRET_KEY
- [ ] Set FLASK_ENV=production
- [ ] Configure resource limits appropriately
- [ ] Set up backup schedule
- [ ] Configure firewall rules
- [ ] Set up monitoring and logging
- [ ] Test backup/restore procedures
- [ ] Document the deployment procedure for your team
- [ ] Set INIT_DB=false and SEED_DB=false after first deployment

## 📞 Support

For issues or questions, refer to:
- Documentation in `documentation/` folder
- Docker logs: `docker-compose logs -f`
- Application logs: `./logs/` directory

@@ -1,346 +0,0 @@

# Recticel Quality Application - Docker Solution Summary

## 📦 What Has Been Created

A complete, production-ready Docker deployment solution for your Recticel Quality Application with the following components:

### Core Files Created

1. **`Dockerfile`** - Multi-stage Flask application container
   - Based on Python 3.10-slim
   - Installs all dependencies from requirements.txt
   - Configures Gunicorn WSGI server
   - Exposes port 8781

2. **`docker-compose.yml`** - Complete orchestration configuration
   - MariaDB 11.3 database service
   - Flask web application service
   - Automatic networking between services
   - Health checks for both services
   - Volume persistence for database and logs

3. **`docker-entrypoint.sh`** - Smart initialization script
   - Waits for the database to be ready (a minimal wait-for-DB sketch follows this list)
   - Creates the database configuration file
   - Runs database schema initialization
   - Seeds the superadmin user
   - Starts the application

4. **`init-db.sql`** - MariaDB initialization
   - Creates database and user
   - Sets up permissions automatically

5. **`.env.example`** - Configuration template
   - Database passwords
   - Port configurations
   - Initialization flags

6. **`.dockerignore`** - Build optimization
   - Excludes unnecessary files from the Docker image
   - Reduces image size

7. **`deploy.sh`** - One-command deployment script
   - Checks prerequisites
   - Creates configuration
   - Builds and starts services
   - Shows deployment status

8. **`Makefile`** - Convenient management commands
   - `make install` - First-time installation
   - `make up` - Start services
   - `make down` - Stop services
   - `make logs` - View logs
   - `make shell` - Access container
   - `make backup-db` - Backup database
   - And many more...

9. **`DOCKER_DEPLOYMENT.md`** - Complete documentation
   - Quick start guide
   - Management commands
   - Troubleshooting
   - Security considerations
   - Architecture diagrams

### Enhanced Files

10. **`setup_complete_database.py`** - Updated to support Docker
    - Now reads from environment variables
    - Falls back to the config file for non-Docker deployments
    - Maintains backward compatibility
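
To make the initialization flow of `docker-entrypoint.sh` (item 3 above) concrete, the wait-for-database step of such a script typically looks something like the sketch below. The environment variable names and the final `run:app` WSGI target are assumptions; the real script may differ:

```bash
#!/bin/bash
# Sketch only: wait for MariaDB, initialize if requested, then start Gunicorn.
until mariadb -h "${DB_HOST:-db}" -u "$DB_USER" -p"$DB_PASSWORD" -e "SELECT 1" >/dev/null 2>&1; do
    echo "Waiting for database at ${DB_HOST:-db}..."
    sleep 2
done

if [ "$INIT_DB" = "true" ]; then
    python3 /app/app/db_create_scripts/setup_complete_database.py
fi

# module:callable assumed to be run:app
exec gunicorn --bind 0.0.0.0:8781 run:app
```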

## 🎯 Key Features

### 1. Single-Command Deployment
```bash
./deploy.sh
```
This single command will:
- ✅ Build Docker images
- ✅ Create the MariaDB database
- ✅ Initialize all database tables and triggers
- ✅ Seed the superadmin user
- ✅ Start the application

### 2. Complete Isolation
- Application runs in its own container
- Database runs in its own container
- No system dependencies needed except Docker
- No Python/MariaDB installation required on the host

### 3. Data Persistence
- Database data persists across restarts (Docker volume)
- Application logs accessible on the host
- Configuration preserved

### 4. Production Ready
- Gunicorn WSGI server (not the Flask dev server)
- Health checks for monitoring
- Automatic restart on failure
- Proper logging configuration
- Resource isolation

### 5. Easy Management
```bash
# Start
docker compose up -d

# Stop
docker compose down

# View logs
docker compose logs -f

# Backup database
make backup-db

# Restore database
make restore-db BACKUP=backup_20231215.sql

# Access shell
make shell

# Complete reset
make reset
```

## 🚀 Deployment Options

### Option 1: Quick Deploy (Recommended for Testing)
```bash
cd /srv/quality_recticel
./deploy.sh
```

### Option 2: Using Makefile (Recommended for Management)
```bash
cd /srv/quality_recticel
make install   # First time only
make up        # Start services
make logs      # Monitor
```

### Option 3: Using Docker Compose Directly
```bash
cd /srv/quality_recticel
cp .env.example .env
docker compose up -d --build
```

## 📋 Prerequisites

The deployment **requires** Docker to be installed on the target system:

### Installing Docker on Ubuntu/Debian:
```bash
# Update package index
sudo apt-get update

# Install dependencies
sudo apt-get install -y ca-certificates curl gnupg

# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Set up the repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add current user to docker group (optional, to run without sudo)
sudo usermod -aG docker $USER
```

After installation, log out and back in for group changes to take effect.

### Installing Docker on CentOS/RHEL:
```bash
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker $USER
```

## 🏗️ Architecture

```
┌──────────────────────────────────────────────────────┐
│                Docker Compose Stack                  │
│                                                      │
│  ┌────────────────────┐      ┌───────────────────┐  │
│  │   MariaDB 11.3     │      │    Flask App      │  │
│  │   Container        │◄─────┤    Container      │  │
│  │                    │      │                   │  │
│  │  - Port: 3306      │      │  - Port: 8781     │  │
│  │  - Volume: DB Data │      │  - Gunicorn WSGI  │  │
│  │  - Auto Init       │      │  - Python 3.10    │  │
│  │  - Health Checks   │      │  - Health Checks  │  │
│  └──────────┬─────────┘      └─────────┬─────────┘  │
│             │                          │            │
└─────────────┼──────────────────────────┼────────────┘
              │                          │
              ▼                          ▼
      [mariadb_data]            [logs directory]
      Docker Volume             Host filesystem
```

## 🔐 Security Features

1. **Database Isolation**: Database not exposed to the host by default (can be configured)
2. **Password Management**: All passwords in `.env` file (not committed to git)
3. **User Permissions**: Proper MariaDB user with limited privileges
4. **Network Isolation**: Services communicate on a private Docker network
5. **Production Mode**: Flask runs in production mode with Gunicorn

## 📊 What Gets Deployed

### Database Schema
All tables from `setup_complete_database.py`:
- `scan1_orders` - First scan orders
- `scanfg_orders` - Finished goods scan orders
- `order_for_labels` - Label orders
- `warehouse_locations` - Warehouse locations
- `permissions` - Permission system
- `role_permissions` - Role-based access
- `role_hierarchy` - Role hierarchy
- `permission_audit_log` - Audit logging
- Plus SQLAlchemy tables: `users`, `roles`

### Initial Data
- Superadmin user: `superadmin` / `superadmin123`

### Application Features
- Complete Flask web application
- Gunicorn WSGI server (4-8 workers depending on CPU)
- Static file serving
- Session management
- Database connection pooling
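
For reference, the kind of Gunicorn invocation this implies (multiple workers, logs under `./logs`) is sketched below; the actual flags live in the Dockerfile/entrypoint and may differ, and the `run:app` module:callable target is an assumption:

```bash
# Sketch only - the real command is defined inside the image, not here
gunicorn --workers 5 \
         --bind 0.0.0.0:8781 \
         --access-logfile /app/logs/access.log \
         --error-logfile /app/logs/error.log \
         run:app
```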

## 🔄 Migration from Existing Deployment

If you have an existing non-Docker deployment:

### 1. Backup Current Data
```bash
# Backup database
mysqldump -u trasabilitate -p trasabilitate > backup.sql

# Backup any uploaded files or custom data
cp -r py_app/instance backup_instance/
```

### 2. Deploy Docker Solution
```bash
cd /srv/quality_recticel
./deploy.sh
```

### 3. Restore Data (if needed)
```bash
# Restore database
docker compose exec -T db mariadb -u trasabilitate -pInitial01! trasabilitate < backup.sql
```

### 4. Stop Old Service
```bash
# Stop systemd service
sudo systemctl stop trasabilitate
sudo systemctl disable trasabilitate
```

## 🎓 Learning Resources

- Docker Compose docs: https://docs.docker.com/compose/
- Gunicorn configuration: https://docs.gunicorn.org/
- MariaDB Docker: https://hub.docker.com/_/mariadb

## ✅ Testing Checklist

After deployment, verify:

- [ ] Services are running: `docker compose ps`
- [ ] App is accessible: http://localhost:8781
- [ ] Can log in with superadmin
- [ ] Database contains tables: `make shell-db` then `SHOW TABLES;`
- [ ] Logs are being written: `ls -la logs/`
- [ ] Can restart services: `docker compose restart`
- [ ] Data persists after restart

## 🆘 Support Commands

```bash
# View all services
docker compose ps

# View logs
docker compose logs -f

# Restart a specific service
docker compose restart web

# Access web container shell
docker compose exec web bash

# Access database
docker compose exec db mariadb -u trasabilitate -p

# Check resource usage
docker stats

# Remove everything and start fresh
docker compose down -v
./deploy.sh
```

## 📝 Next Steps

1. **Install Docker** on the target server (if not already installed)
2. **Review and customize** the `.env` file after copying it from `.env.example`
3. **Run deployment**: `./deploy.sh`
4. **Change default passwords** after first login
5. **Set up a reverse proxy** (nginx/traefik) for HTTPS if needed
6. **Configure backups** using `make backup-db`
7. **Monitor logs** regularly with `make logs`

## 🎉 Benefits of This Solution

1. **Portable**: Works on any system with Docker
2. **Reproducible**: Same deployment every time
3. **Isolated**: No conflicts with system packages
4. **Easy Updates**: Just rebuild and restart
5. **Scalable**: Can easily add more services
6. **Professional**: Production-ready configuration
7. **Documented**: Complete documentation included
8. **Maintainable**: Simple management commands

---

**Your Flask application is now ready for modern, containerized deployment! 🚀**

@@ -1,146 +0,0 @@

# Excel File Upload Mapping

## File Information
- **File**: `1cc01b8Comenzi Productie (19).xlsx`
- **Sheets**: DataSheet (corrupted), Sheet1 (249 rows × 29 columns)
- **Purpose**: Production orders for label generation

## Excel Columns (29 total)

### Core Order Fields (✅ Stored in Database)
1. **Comanda Productie** → `comanda_productie` ✅
2. **Cod Articol** → `cod_articol` ✅
3. **Descriere** → `descr_com_prod` ✅
4. **Cantitate ceruta** → `cantitate` ✅
5. **Delivery date** → `data_livrare` ✅
6. **Customer** → `customer_name` ✅
7. **Comanda client** → `com_achiz_client` ✅

### Additional Fields (📊 Read but not stored in order_for_labels table)
8. **Status** → `status` 📊
9. **End of Quilting** → `end_of_quilting` 📊
10. **End of sewing** → `end_of_sewing` 📊
11. **T1** → `t1` 📊 (Quality control stage 1)
12. **Data inregistrare T1** → `data_inregistrare_t1` 📊
13. **Numele Complet T1** → `numele_complet_t1` 📊
14. **T2** → `t2` 📊 (Quality control stage 2)
15. **Data inregistrare T2** → `data_inregistrare_t2` 📊
16. **Numele Complet T2** → `numele_complet_t2` 📊
17. **T3** → `t3` 📊 (Quality control stage 3)
18. **Data inregistrare T3** → `data_inregistrare_t3` 📊
19. **Numele Complet T3** → `numele_complet_t3` 📊
20. **Clasificare** → `clasificare` 📊
21. **Masina Cusut** → `masina_cusut` 📊
22. **Tip Masina** → `tip_masina` 📊
23. **Timp normat total** → `timp_normat_total` 📊
24. **Data Deschiderii** → `data_deschiderii` 📊
25. **Model Lb2** → `model_lb2` 📊
26. **Data Planific.** → `data_planific` 📊
27. **Numar masina** → `numar_masina` 📊
28. **Design nr** → `design_nr` 📊
29. **Needle position** → `needle_position` 📊

## Database Schema (order_for_labels)

```sql
CREATE TABLE order_for_labels (
    id INT AUTO_INCREMENT PRIMARY KEY,
    comanda_productie VARCHAR(25) NOT NULL,
    cod_articol VARCHAR(25) NOT NULL,
    descr_com_prod VARCHAR(100),
    cantitate INT,
    data_livrare DATE,
    dimensiune VARCHAR(25),
    com_achiz_client VARCHAR(25),
    nr_linie_com_client INT,
    customer_name VARCHAR(50),
    customer_article_number VARCHAR(25),
    open_for_order VARCHAR(25),
    line_number INT,
    printed_labels INT DEFAULT 0
);
```

## Validation Rules

### Required Fields
- ✅ `comanda_productie` (Production Order #)
- ✅ `cod_articol` (Article Code)
- ✅ `descr_com_prod` (Description)
- ✅ `cantitate` (Quantity)

### Optional Fields
- `data_livrare` (Delivery Date)
- `dimensiune` (Dimension)
- `com_achiz_client` (Customer Order #)
- `nr_linie_com_client` (Customer Order Line)
- `customer_name` (Customer Name)
- `customer_article_number` (Customer Article #)
- `open_for_order` (Open for Order)
- `line_number` (Line Number)

## Processing Logic

1. **Sheet Selection**: Tries Sheet1 → sheet 0 → DataSheet
2. **Column Normalization**: Converts to lowercase, strips whitespace
3. **Column Mapping**: Maps Excel columns to database fields
4. **Row Processing**:
   - Skips empty rows
   - Handles NaN values (converts to empty string)
   - Validates required fields
   - Returns validation errors and warnings
5. **Data Storage**: Only valid rows with required fields are stored (a minimal sketch of this flow follows below)
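
The real implementation lives in `order_labels.py`; the sketch below only illustrates the flow described above (sheet fallback, header normalization, NaN handling, required-field check) and omits the column-to-database mapping step:

```python
# Sketch of the processing flow; the production code is in app/order_labels.py
import pandas as pd

REQUIRED = ["comanda productie", "cod articol", "descriere", "cantitate ceruta"]

def read_orders(path):
    # 1. Sheet selection: try Sheet1, then the first sheet, then DataSheet
    for sheet in ("Sheet1", 0, "DataSheet"):
        try:
            df = pd.read_excel(path, sheet_name=sheet)
            break
        except Exception:
            continue
    else:
        raise ValueError("no readable sheet found")

    # 2. Column normalization: lowercase + strip whitespace
    df.columns = [str(c).strip().lower() for c in df.columns]

    rows, errors = [], []
    for i, row in df.iterrows():
        # 4. Skip empty rows and convert NaN values to empty strings
        if row.isna().all():
            continue
        record = {col: ("" if pd.isna(val) else val) for col, val in row.items()}
        # Validate required fields before accepting the row
        missing = [c for c in REQUIRED if not record.get(c)]
        if missing:
            errors.append(f"row {i + 2}: missing {missing}")
        else:
            rows.append(record)
    return rows, errors
```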

## Sample Data (Row 1)

```
comanda_productie : CP00267043
cod_articol       : PF010147
descr_com_prod    : HUSA STARLINE NEXT X7 90X210
cantitate         : 1
data_livrare      : 2024-03-12
customer_name     : 411_01RECT BED
com_achiz_client  : 379579-1
status            : Inchis
clasificare       : HP3D
masina_cusut      : SPECIALA
```

## Upload Functionality

**URL**: `/upload_data` (Labels module)

**Supported Formats**:
- CSV (.csv)
- Excel (.xlsx, .xls)

**Process**:
1. User uploads file
2. System validates file type
3. Processes file (CSV or Excel)
4. Shows preview with validation
5. User confirms upload
6. Data inserted into database

## Testing

```bash
# Test Excel reading
cd /srv/quality_app
python3 << 'EOF'
import pandas as pd
df = pd.read_excel("1cc01b8Comenzi Productie (19).xlsx", sheet_name='Sheet1')
print(f"✅ Read {len(df)} rows × {len(df.columns)} columns")
print(f"Required fields present: {all(col in df.columns for col in ['Comanda Productie', 'Cod Articol', 'Descriere', 'Cantitate ceruta'])}")
EOF
```

## Implementation Files

- `/srv/quality_app/py_app/app/order_labels.py` - Processing functions
- `/srv/quality_app/py_app/app/routes.py` - Upload route handler
- `/srv/quality_app/py_app/app/templates/upload_orders.html` - Upload UI

---
**Status**: ✅ All 29 columns readable and mapped correctly
**Date**: 2024-11-26

@@ -1,280 +0,0 @@

# ✅ Docker Solution - Files Created

## 📦 Complete Docker Deployment Package

Your Flask application has been packaged into a complete Docker solution. Here's everything that was created:

### Core Docker Files

```
/srv/quality_recticel/
├── Dockerfile              # Flask app container definition
├── docker-compose.yml      # Multi-container orchestration
├── docker-entrypoint.sh    # Container initialization script
├── init-db.sql             # MariaDB initialization
├── .dockerignore           # Build optimization
└── .env.example            # Configuration template
```

### Deployment & Management

```
├── deploy.sh                    # One-command deployment script
├── Makefile                     # Management commands (make up, make down, etc.)
├── README-DOCKER.md             # Quick start guide
├── DOCKER_DEPLOYMENT.md         # Complete deployment documentation
└── DOCKER_SOLUTION_SUMMARY.md   # This comprehensive summary
```

### Modified Files

```
py_app/app/db_create_scripts/
└── setup_complete_database.py   # Updated to support Docker env vars
```

## 🎯 What This Deployment Includes

### Services
1. **Flask Web Application**
   - Python 3.10
   - Gunicorn WSGI server (production-ready)
   - Auto-generated database configuration
   - Health checks
   - Automatic restart on failure

2. **MariaDB 11.3 Database**
   - Automatic initialization
   - User and database creation
   - Data persistence (Docker volume)
   - Health checks

### Features
- ✅ Single-command deployment
- ✅ Automatic database schema setup
- ✅ Superadmin user seeding
- ✅ Data persistence across restarts
- ✅ Container health monitoring
- ✅ Log collection and management
- ✅ Production-ready configuration
- ✅ Easy backup and restore
- ✅ Complete isolation from host system

## 🚀 How to Deploy

### Prerequisites
**Install Docker first:**
```bash
# Ubuntu/Debian
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
# Log out and back in
```

### Deploy
```bash
cd /srv/quality_recticel
./deploy.sh
```

That's it! Your application will be available at http://localhost:8781

## 📋 Usage Examples

### Basic Operations
```bash
# Start services
docker compose up -d

# View logs
docker compose logs -f

# Stop services
docker compose down

# Restart
docker compose restart

# Check status
docker compose ps
```

### Using Makefile (Recommended)
```bash
make install     # First-time setup
make up          # Start services
make down        # Stop services
make logs        # View logs
make logs-web    # View only web logs
make logs-db     # View only database logs
make shell       # Access app container
make shell-db    # Access database console
make backup-db   # Backup database
make status      # Show service status
make help        # Show all commands
```

### Advanced Operations
```bash
# Rebuild after code changes
docker compose up -d --build web

# Access application shell
docker compose exec web bash

# Run database commands
docker compose exec db mariadb -u trasabilitate -p trasabilitate

# View resource usage
docker stats recticel-app recticel-db

# Complete reset (removes all data!)
docker compose down -v
```

## 🗂️ Data Storage

### Persistent Data
- **Database**: Stored in Docker volume `mariadb_data`
- **Logs**: Mounted to `./logs` directory
- **Config**: Mounted to `./instance` directory

### Backup Database
```bash
docker compose exec -T db mariadb-dump -u trasabilitate -pInitial01! trasabilitate > backup.sql
```

### Restore Database
```bash
docker compose exec -T db mariadb -u trasabilitate -pInitial01! trasabilitate < backup.sql
```

## 🔐 Default Credentials

### Application
- URL: http://localhost:8781
- Username: `superadmin`
- Password: `superadmin123`
- **⚠️ Change after first login!**

### Database
- Host: `localhost:3306` (from host) or `db:3306` (from containers)
- Database: `trasabilitate`
- User: `trasabilitate`
- Password: `Initial01!`
- Root Password: Set in `.env` file

## 📊 Service Architecture

```
┌─────────────────────────────────────────────────────┐
│              recticel-network (Docker)              │
│                                                     │
│  ┌─────────────────┐        ┌─────────────────┐    │
│  │  recticel-db    │        │  recticel-app   │    │
│  │  (MariaDB 11.3) │◄───────┤  (Flask/Python) │    │
│  │                 │        │                 │    │
│  │  - Internal DB  │        │  - Gunicorn     │    │
│  │  - Health Check │        │  - Health Check │    │
│  │  - Auto Init    │        │  - Auto Config  │    │
│  └────────┬────────┘        └────────┬────────┘    │
│           │                          │             │
│           │ 3306 (optional)     8781 │             │
└───────────┼──────────────────────────┼─────────────┘
            │                          │
            ▼                          ▼
     [mariadb_data]              [Host: 8781]
     Docker Volume            Application Access
```

## 🎓 Quick Reference

### Environment Variables (.env)
```env
MYSQL_ROOT_PASSWORD=rootpassword   # MariaDB root password
DB_PORT=3306                       # Database port (external)
APP_PORT=8781                      # Application port
INIT_DB=true                       # Run DB initialization
SEED_DB=true                       # Seed superadmin user
```

### Important Ports
- `8781`: Flask application (web interface)
- `3306`: MariaDB database (optional external access)

### Log Locations
- Application logs: `./logs/access.log` and `./logs/error.log`
- Container logs: `docker compose logs`

## 🔧 Troubleshooting

### Can't connect to application
```bash
# Check if services are running
docker compose ps

# Check web logs
docker compose logs web

# Verify port not in use
netstat -tuln | grep 8781
```

### Database connection issues
```bash
# Check database health
docker compose exec db healthcheck.sh --connect

# View database logs
docker compose logs db

# Test database connection
docker compose exec web python3 -c "import mariadb; print('OK')"
```

### Port already in use
Edit `.env` file:
```env
APP_PORT=8782   # Change to available port
DB_PORT=3307    # Change if needed
```

### Start completely fresh
```bash
docker compose down -v
rm -rf logs/* instance/external_server.conf
./deploy.sh
```

## 📖 Documentation Files

1. **README-DOCKER.md** - Quick start guide (start here!)
2. **DOCKER_DEPLOYMENT.md** - Complete deployment guide
3. **DOCKER_SOLUTION_SUMMARY.md** - Comprehensive overview
4. **FILES_CREATED.md** - This file

## ✨ Benefits

- **No System Dependencies**: Only Docker required
- **Portable**: Deploy on any system with Docker
- **Reproducible**: Consistent deployments every time
- **Isolated**: No conflicts with other applications
- **Production-Ready**: Gunicorn, health checks, proper logging
- **Easy Management**: Simple commands, one-line deployment
- **Persistent**: Data survives container restarts
- **Scalable**: Easy to add more services

## 🎉 Success!

Your Recticel Quality Application is now containerized and ready for deployment!

**Next Steps:**
1. Install Docker (if not already installed)
2. Run `./deploy.sh`
3. Access http://localhost:8781
4. Log in with superadmin credentials
5. Change default passwords
6. Enjoy your containerized application!

For detailed instructions, see **README-DOCKER.md** or **DOCKER_DEPLOYMENT.md**.

@@ -1,123 +0,0 @@

# Improvements Applied to Quality App

## Date: November 13, 2025

### Overview
All improvements from the production environment have been successfully carried over to the quality_app project.

## Files Updated/Copied

### 1. Docker Configuration
- **Dockerfile** - Added `mariadb-client` package for backup functionality
- **docker-compose.yml** - Updated with proper volume mappings and /data folder support
- **.env** - Updated all paths to use absolute paths under `/srv/quality_app/`

### 2. Backup & Restore System
- **database_backup.py** - Fixed backup/restore functions (a condensed sketch of the fix follows this section):
  - Changed `result_success` to `result.returncode == 0`
  - Added `--skip-ssl` flag for MariaDB connections
  - Fixed restore function error handling
- **restore_database.sh** - Fixed SQL file parsing to handle MariaDB dump format
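
A condensed view of what the `database_backup.py` fix amounts to is shown below; it is not the file's exact code, which builds a longer command line and writes the dump to the backup directory:

```python
# Sketch of the corrected success check and the --skip-ssl flag
import subprocess

cmd = [
    "mariadb-dump", "--skip-ssl",
    "-h", "db", "-u", "trasabilitate", "-pInitial01!",
    "trasabilitate",
]
result = subprocess.run(cmd, capture_output=True, text=True)

# Before the fix the code tested a non-existent `result_success` value;
# the reliable check is the process exit code:
if result.returncode == 0:
    print("backup completed")
else:
    print(f"backup failed: {result.stderr}")
```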

### 3. UI Improvements - Sticky Table Headers
- **base.css** - Added sticky header CSS for all report tables
- **scan.html** - Wrapped table in `report-table-container` div
- **fg_scan.html** - Wrapped table in `report-table-container` div
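
The sticky-header rule added to `base.css` boils down to CSS along these lines; the selector uses the wrapper class mentioned above, while the exact heights and colors in the real stylesheet may differ:

```css
/* Sketch: keep table headers visible while the report body scrolls */
.report-table-container {
    max-height: 70vh;     /* scroll inside the container, not the page */
    overflow-y: auto;
}

.report-table-container thead th {
    position: sticky;
    top: 0;
    background: #fff;     /* keep rows from showing through the header */
    z-index: 1;
}
```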

### 4. Quality Code Display Enhancement
- **fg_quality.js** - Quality code `0` displays as "OK" in green; CSV exports as "0"
- **script.js** - Same improvements for quality module reports
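
In JavaScript terms, the display rule amounts to something like the sketch below; the function names are illustrative only, and the real logic sits in `fg_quality.js` / `script.js`:

```javascript
// Sketch: quality code 0 renders as a green "OK", but CSV export keeps "0"
function renderQualityCode(code) {
    if (Number(code) === 0) {
        return '<span style="color: green;">OK</span>';
    }
    return String(code);
}

function csvQualityCode(code) {
    return String(code);   // export the raw value, so 0 stays "0"
}
```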

## Directory Structure

```
/srv/quality_app/
├── py_app/              # Application code (mapped to /app in container)
├── data/
│   └── mariadb/         # Database files
├── config/
│   └── instance/        # Application configuration
├── logs/                # Application logs
├── backups/             # Database backups
├── docker-compose.yml
├── Dockerfile
├── .env
└── restore_database.sh
```

## Environment Configuration

### Volume Mappings in .env:
```
DB_DATA_PATH=/srv/quality_app/data/mariadb
APP_CODE_PATH=/srv/quality_app/py_app
LOGS_PATH=/srv/quality_app/logs
INSTANCE_PATH=/srv/quality_app/config/instance
BACKUP_PATH=/srv/quality_app/backups
```

## Features Implemented

### ✅ Backup System
- Automatic scheduled backups
- Manual backup creation
- Data-only backups
- Backup retention policies
- MariaDB client tools installed

### ✅ Restore System
- Python-based restore function
- Shell script restore with proper SQL parsing
- Handles MariaDB dump format correctly

### ✅ UI Enhancements
- **Sticky Headers**: Table headers remain fixed when scrolling
- **Quality Code Display**:
  - Shows "OK" in green for quality code 0
  - Exports "0" in CSV files
  - Better user experience

### ✅ Volume Mapping
- All volumes use absolute paths
- Support for /data folder mapping
- Easy to configure backup location on different drives

## Starting the Application

```bash
cd /srv/quality_app
docker compose up -d --build
```

## Testing Backup & Restore

### Create Backup:
```bash
cd /srv/quality_app
docker compose exec web bash -c "cd /app && python3 -c 'from app import create_app; from app.database_backup import DatabaseBackupManager; app = create_app();
with app.app_context(): bm = DatabaseBackupManager(); result = bm.create_backup(); print(result)'"
```

### Restore Backup:
```bash
cd /srv/quality_app
./restore_database.sh /srv/quality_app/backups/backup_file.sql
```

## Notes

- Database initialization is set to `false` (already initialized)
- All improvements are production-ready
- The backup path can be changed to an external drive if needed
- Application port: 8781 (default)

## Next Steps

1. Review the .env file and update passwords if needed
2. Test all functionality after deployment
3. Configure the backup schedule if needed
4. Set up an external backup drive if desired

---
**Compatibility**: All changes are backward compatible with existing data.
**Status**: Ready for deployment

@@ -1,292 +0,0 @@
# Merge Compatibility Analysis: docker-deploy → master

## 📊 Merge Status: **SAFE TO MERGE** ✅

### Conflict Analysis
- **No merge conflicts detected** between `master` and `docker-deploy` branches
- All changes are additive or modify existing code in compatible ways
- The docker-deploy branch touches 13 files, with 1034 insertions and 117 deletions

### Files Changed
#### New Files (No conflicts):
1. `DOCKER_DEPLOYMENT_GUIDE.md` - Documentation
2. `IMPROVEMENTS_APPLIED.md` - Documentation
3. `quick-deploy.sh` - Deployment script
4. `restore_database.sh` - Restore script
5. `setup-volumes.sh` - Setup script

#### Modified Files:
1. `Dockerfile` - Added mariadb-client package
2. `docker-compose.yml` - Added /data volume mapping, resource limits
3. `py_app/app/database_backup.py` - **CRITICAL: Compatibility layer added**
4. `py_app/app/static/css/base.css` - Added sticky header styles
5. `py_app/app/static/fg_quality.js` - Quality code display enhancement
6. `py_app/app/static/script.js` - Quality code display enhancement
7. `py_app/app/templates/fg_scan.html` - Added report-table-container wrapper
8. `py_app/app/templates/scan.html` - Added report-table-container wrapper

---

## 🔧 Compatibility Layer: database_backup.py

### Problem Identified
The docker-deploy branch changed the backup commands from `mysqldump` to `mariadb-dump` and added the `--skip-ssl` flag, which would break the application when running under a standard Gunicorn (non-Docker) deployment.

### Solution Implemented
Added environment detection and dynamic command selection:

#### 1. Dynamic Command Detection
```python
def _detect_dump_command(self):
    """Detect which dump command is available (mariadb-dump or mysqldump)"""
    try:
        # Try mariadb-dump first (newer MariaDB versions)
        result = subprocess.run(['which', 'mariadb-dump'],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return 'mariadb-dump'

        # Fall back to mysqldump
        result = subprocess.run(['which', 'mysqldump'],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return 'mysqldump'

        # Default to mariadb-dump (will error if not available)
        return 'mariadb-dump'
    except Exception as e:
        print(f"Warning: Could not detect dump command: {e}")
        return 'mysqldump'  # Default fallback
```

#### 2. Conditional SSL Arguments
```python
def _get_ssl_args(self):
    """Get SSL arguments based on environment (Docker needs --skip-ssl)"""
    # Check if running in Docker container
    if os.path.exists('/.dockerenv') or os.environ.get('DOCKER_CONTAINER'):
        return ['--skip-ssl']
    return []
```

#### 3. Updated Backup Command Building
```python
cmd = [
    self.dump_command,  # Uses detected command (mariadb-dump or mysqldump)
    f"--host={self.config['host']}",
    f"--port={self.config['port']}",
    f"--user={self.config['user']}",
    f"--password={self.config['password']}",
]

# Add SSL args if needed (Docker environment)
cmd.extend(self._get_ssl_args())

# Add backup options
cmd.extend([
    '--single-transaction',
    '--skip-lock-tables',
    '--force',
    # ... other options
])
```

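For context, here is a self-contained, illustrative sketch of how a command assembled this way ends up being executed and checked. The host, credentials, database name, and output path are placeholders, not the project's actual configuration.

```python
import subprocess

# Placeholder values for illustration only; the real code builds `cmd` from self.config.
cmd = [
    "mariadb-dump",
    "--host=127.0.0.1",
    "--port=3306",
    "--user=example_user",
    "--password=example_password",
    "--skip-ssl",              # appended only when running inside Docker
    "--single-transaction",
    "--skip-lock-tables",
    "example_database",        # target database name (placeholder)
]

backup_file = "/tmp/example_backup.sql"  # placeholder output path
with open(backup_file, "w") as out:
    result = subprocess.run(cmd, stdout=out, stderr=subprocess.PIPE, text=True)

if result.returncode == 0:
    print(f"Backup written to {backup_file}")
else:
    print(f"Backup failed: {result.stderr.strip()}")
```
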
---

## 🎯 Deployment Scenarios

### Scenario 1: Docker Deployment (docker-compose)
**Environment Detection:**
- ✅ `/.dockerenv` file exists
- ✅ `DOCKER_CONTAINER` environment variable set in docker-compose.yml

**Backup Behavior:**
- Uses `mariadb-dump` (installed in Dockerfile)
- Adds `--skip-ssl` flag automatically
- Works correctly ✅

### Scenario 2: Standard Gunicorn Deployment (systemd service)
**Environment Detection:**
- ❌ `/.dockerenv` file does NOT exist
- ❌ `DOCKER_CONTAINER` environment variable NOT set

**Backup Behavior:**
- Detects available command: `mysqldump` or `mariadb-dump`
- Does NOT add `--skip-ssl` flag
- Uses system-installed MySQL/MariaDB client tools
- Works correctly ✅

### Scenario 3: Mixed Environment (External Database)
**Both deployment types can connect to:**
- External MariaDB server
- Remote database instance
- Local database with proper SSL configuration

**Backup Behavior:**
- Automatically adapts to available tools
- SSL handling based on container detection
- Works correctly ✅

---

## 🧪 Testing Plan

### Pre-Merge Testing
1. **Docker Environment:**
```bash
cd /srv/quality_app
git checkout docker-deploy
docker-compose up -d
# Test backup via web UI
# Test scheduled backup
# Test restore functionality
```

2. **Gunicorn Environment:**
```bash
# Stop Docker if running
docker-compose down

# Start with systemd service (if available)
sudo systemctl start trasabilitate

# Test backup via web UI
# Test scheduled backup
# Test restore functionality
```

3. **Command Detection Test:**
```bash
# Inside Docker container
docker-compose exec web python3 -c "
from app.database_backup import DatabaseBackupManager
manager = DatabaseBackupManager()
print(f'Dump command: {manager.dump_command}')
print(f'SSL args: {manager._get_ssl_args()}')
"

# On host system (if MySQL client installed)
python3 -c "
from app.database_backup import DatabaseBackupManager
manager = DatabaseBackupManager()
print(f'Dump command: {manager.dump_command}')
print(f'SSL args: {manager._get_ssl_args()}')
"
```

### Post-Merge Testing
1. Verify both deployment methods still work
2. Test backup/restore in both environments
3. Verify scheduled backups function correctly
4. Check error handling when tools are missing (see the sketch below)

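A small helper like the one below can make step 4 concrete. It only reports which dump tools are present on PATH, mirroring the `which` checks used above; it does not exercise the application's own error handling.

```python
import subprocess

def dump_tool_available(name: str) -> bool:
    """Return True if the given dump tool can be found on PATH."""
    return subprocess.run(["which", name], capture_output=True).returncode == 0

for tool in ("mariadb-dump", "mysqldump"):
    status = "found" if dump_tool_available(tool) else "missing"
    print(f"{tool}: {status}")
```
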
---

## 📋 Merge Checklist

- [x] No merge conflicts detected
- [x] Compatibility layer implemented in `database_backup.py`
- [x] Environment detection for Docker vs Gunicorn
- [x] Dynamic command selection (mariadb-dump vs mysqldump)
- [x] Conditional SSL flag handling
- [x] UI improvements (sticky headers) are purely CSS/JS - no conflicts
- [x] Quality code display changes are frontend-only - no conflicts
- [x] New documentation files added - no conflicts
- [x] Docker-specific files don't affect Gunicorn deployment

### Safe to Merge Because:
1. **Additive Changes**: Most changes are new files or new features
2. **Backward Compatible**: Code detects environment and adapts
3. **No Breaking Changes**: Gunicorn deployment still works without Docker
4. **Independent Features**: UI improvements work in any environment
5. **Fail-Safe Defaults**: Falls back to mysqldump if mariadb-dump unavailable

---

## 🚀 Merge Process

### Recommended Steps:
```bash
cd /srv/quality_app

# 1. Ensure working directory is clean
git status

# 2. Switch to master branch
git checkout master

# 3. Pull latest changes
git pull origin master

# 4. Merge docker-deploy (should be clean merge)
git merge docker-deploy

# 5. Review merge
git log --oneline -10

# 6. Test in current environment
# (If using systemd, test the app)
# (If using Docker, test with docker-compose)

# 7. Push to remote
git push origin master

# 8. Tag the release (optional)
git tag -a v2.0-docker -m "Docker deployment support with compatibility layer"
git push origin v2.0-docker
```

### Rollback Plan (if needed):
```bash
# If issues arise after merge
git log --oneline -10  # Find commit hash before merge
git reset --hard <commit-hash-before-merge>
git push origin master --force  # Use with caution!

# Or revert the merge commit
git revert -m 1 <merge-commit-hash>
git push origin master
```

---

## 🎓 Key Improvements in docker-deploy Branch

### 1. **Bug Fixes**
- Fixed `result_success` variable error → `result.returncode == 0`
- Fixed restore SQL parsing with sed preprocessing
- Fixed missing mariadb-client in Docker container

### 2. **Docker Support**
- Complete Docker Compose setup
- Volume mapping for persistent data
- Health checks and resource limits
- Environment-based configuration

### 3. **UI Enhancements**
- Sticky table headers for scrollable reports
- Quality code 0 displays as "OK" (green)
- CSV export preserves original "0" value

### 4. **Compatibility**
- Works in Docker AND traditional Gunicorn deployment
- Auto-detects available backup tools
- Environment-aware SSL handling
- No breaking changes to existing functionality

---

## 📞 Support

If issues arise after merge:
1. Check environment detection: `ls -la /.dockerenv`
2. Verify backup tools: `which mysqldump mariadb-dump`
3. Review logs: `docker-compose logs web` or application logs
4. Test backup manually from command line
5. Fall back to master branch if critical issues occur

---

**Last Updated:** 2025-11-13
**Branch:** docker-deploy → master
**Status:** Ready for merge ✅

Binary file not shown.
@@ -1,73 +0,0 @@
# 🚀 Quick Start - Docker Deployment

## What You Need
- A server with Docker installed
- 2GB free disk space
- Ports 8781 and 3306 available

## Deploy in 3 Steps

### 1️⃣ Install Docker (if not already installed)

**Ubuntu/Debian:**
```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
```
Then log out and back in.

### 2️⃣ Deploy the Application
```bash
cd /srv/quality_recticel
./deploy.sh
```

### 3️⃣ Access Your Application
Open browser: **http://localhost:8781**

**Login:**
- Username: `superadmin`
- Password: `superadmin123`

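If you prefer to confirm the deployment from the command line before opening a browser, a quick reachability check like the one below also works. It assumes Python 3 on the host and the default port from this guide; it is a convenience only, not part of the deployment.

```python
import urllib.request

URL = "http://localhost:8781"  # default port from this guide; adjust if you changed it

try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print(f"Application responded with HTTP {resp.status}")
except Exception as exc:
    print(f"Application not reachable yet: {exc}")
```
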
## 🎯 Done!

Your complete application with database is now running in Docker containers.

## Common Commands

```bash
# View logs
docker compose logs -f

# Stop services
docker compose down

# Restart services
docker compose restart

# Backup database
docker compose exec -T db mariadb-dump -u trasabilitate -pInitial01! trasabilitate > backup.sql
```

## 📚 Full Documentation

See `DOCKER_DEPLOYMENT.md` for complete documentation.

## 🆘 Problems?

```bash
# Check status
docker compose ps

# View detailed logs
docker compose logs -f web

# Start fresh
docker compose down -v
./deploy.sh
```

---

**Note:** This is a production-ready deployment using Gunicorn WSGI server, MariaDB 11.3, and proper health checks.

@@ -1,74 +0,0 @@
# Quality Recticel Application

Production traceability and quality management system.

## 📚 Documentation

All development and deployment documentation has been moved to the **[documentation](./documentation/)** folder.

### Quick Links

- **[Documentation Index](./documentation/README.md)** - Complete documentation overview
- **[Database Setup](./documentation/DATABASE_DOCKER_SETUP.md)** - Database configuration guide
- **[Docker Guide](./documentation/DOCKER_QUICK_REFERENCE.md)** - Docker commands reference
- **[Backup System](./documentation/BACKUP_SYSTEM.md)** - Database backup documentation

## 🚀 Quick Start

```bash
# Start application
cd /srv/quality_app/py_app
bash start_production.sh

# Stop application
bash stop_production.sh

# View logs
tail -f /srv/quality_app/logs/error.log
```

## 📦 Docker Deployment

```bash
# Start with Docker Compose
docker-compose up -d

# View logs
docker-compose logs -f web

# Stop services
docker-compose down
```

## 🔐 Default Access

- **URL**: http://localhost:8781
- **Username**: superadmin
- **Password**: superadmin123

## 📁 Project Structure

```
quality_app/
├── documentation/        # All documentation files
├── py_app/               # Flask application
├── backups/              # Database backups
├── logs/                 # Application logs
├── docker-compose.yml    # Docker configuration
└── Dockerfile            # Container image definition
```

## 📖 For More Information

See the **[documentation](./documentation/)** folder for comprehensive guides on:

- Setup and deployment
- Docker configuration
- Database management
- Backup and restore procedures
- Application features

---

**Version**: 1.0.0
**Last Updated**: November 3, 2025

@@ -1,36 +0,0 @@
<!DOCTYPE html>
<html>
<head>
    <title>Database Test</title>
</head>
<body>
    <h2>Database Connection Test</h2>
    <button id="test-btn">Test Database</button>
    <div id="result"></div>

    <script>
        document.getElementById('test-btn').addEventListener('click', function() {
            const resultDiv = document.getElementById('result');
            resultDiv.innerHTML = 'Loading...';

            fetch('/get_unprinted_orders')
                .then(response => {
                    console.log('Response status:', response.status);
                    if (response.ok) {
                        return response.json();
                    } else {
                        throw new Error('HTTP ' + response.status);
                    }
                })
                .then(data => {
                    console.log('Data received:', data);
                    resultDiv.innerHTML = `<pre>${JSON.stringify(data, null, 2)}</pre>`;
                })
                .catch(error => {
                    console.error('Error:', error);
                    resultDiv.innerHTML = 'Error: ' + error.message;
                });
        });
    </script>
</body>
</html>

File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,487 +0,0 @@
|
||||
{% extends "base.html" %}
|
||||
|
||||
{% block head %}
|
||||
<style>
|
||||
#label-preview {
|
||||
background: #fafafa;
|
||||
position: relative;
|
||||
overflow: hidden;
|
||||
}
|
||||
|
||||
/* Enhanced table styling */
|
||||
.card.scan-table-card table.print-module-table.scan-table thead th {
|
||||
border-bottom: 2px solid #dee2e6 !important;
|
||||
background-color: #f8f9fa !important;
|
||||
padding: 0.25rem 0.4rem !important;
|
||||
text-align: left !important;
|
||||
font-weight: 600 !important;
|
||||
font-size: 10px !important;
|
||||
line-height: 1.2 !important;
|
||||
}
|
||||
|
||||
.card.scan-table-card table.print-module-table.scan-table {
|
||||
width: 100% !important;
|
||||
border-collapse: collapse !important;
|
||||
}
|
||||
|
||||
.card.scan-table-card table.print-module-table.scan-table tbody tr:hover td {
|
||||
background-color: #f8f9fa !important;
|
||||
cursor: pointer !important;
|
||||
}
|
||||
|
||||
.card.scan-table-card table.print-module-table.scan-table tbody tr.selected td {
|
||||
background-color: #007bff !important;
|
||||
color: white !important;
|
||||
}
|
||||
</style>
|
||||
{% endblock %}
|
||||
|
||||
{% block content %}
|
||||
<div class="scan-container" style="display: flex; flex-direction: row; gap: 20px; width: 100%; align-items: flex-start;">
|
||||
<!-- Label Preview Card -->
|
||||
<div class="card scan-form-card" style="display: flex; flex-direction: column; justify-content: flex-start; align-items: center; min-height: 700px; width: 330px; flex-shrink: 0; position: relative; padding: 15px;">
|
||||
<div class="label-view-title" style="width: 100%; text-align: center; padding: 0 0 15px 0; font-size: 18px; font-weight: bold; letter-spacing: 0.5px;">Label View</div>
|
||||
|
||||
<!-- Label Preview Section -->
|
||||
<div id="label-preview" style="border: 1px solid #ddd; padding: 10px; position: relative; background: #fafafa; width: 301px; height: 434.7px;">
|
||||
<!-- Label content rectangle -->
|
||||
<div id="label-content" style="position: absolute; top: 65.7px; left: 11.34px; width: 227.4px; height: 321.3px; border: 2px solid #333; background: white;">
|
||||
<!-- Top row content: Company name -->
|
||||
<div style="position: absolute; top: 0; left: 0; right: 0; height: 32.13px; display: flex; align-items: center; justify-content: center; font-weight: bold; font-size: 12px; color: #000; z-index: 10;">
|
||||
INNOFA ROMANIA SRL
|
||||
</div>
|
||||
|
||||
<!-- Row 2 content: Customer Name -->
|
||||
<div id="customer-name-row" style="position: absolute; top: 32.13px; left: 0; right: 0; height: 32.13px; display: flex; align-items: center; justify-content: center; font-size: 11px; color: #000;">
|
||||
<!-- Customer name will be populated here -->
|
||||
</div>
|
||||
|
||||
<!-- Horizontal dividing lines -->
|
||||
<div style="position: absolute; top: 32.13px; left: 0; right: 0; height: 1px; background: #999;"></div>
|
||||
<div style="position: absolute; top: 64.26px; left: 0; right: 0; height: 1px; background: #999;"></div>
|
||||
<div style="position: absolute; top: 96.39px; left: 0; right: 0; height: 1px; background: #999;"></div>
|
||||
<div style="position: absolute; top: 128.52px; left: 0; right: 0; height: 1px; background: #999;"></div>
|
||||
<div style="position: absolute; top: 160.65px; left: 0; right: 0; height: 1px; background: #999;"></div>
|
||||
<div style="position: absolute; top: 224.91px; left: 0; right: 0; height: 1px; background: #999;"></div>
|
||||
<div style="position: absolute; top: 257.04px; left: 0; right: 0; height: 1px; background: #999;"></div>
|
||||
<div style="position: absolute; top: 289.17px; left: 0; right: 0; height: 1px; background: #999;"></div>
|
||||
|
||||
<!-- Vertical dividing line -->
|
||||
<div style="position: absolute; left: 90.96px; top: 64.26px; width: 1px; height: 257.04px; background: #999;"></div>
|
||||
|
||||
<!-- Row 3: Quantity ordered -->
|
||||
<div style="position: absolute; top: 64.26px; left: 0; width: 90.96px; height: 32.13px; display: flex; align-items: center; padding-left: 5px; font-size: 10px; color: #000;">
|
||||
Quantity ordered
|
||||
</div>
|
||||
<div id="quantity-ordered-value" style="position: absolute; top: 64.26px; left: 90.96px; width: 136.44px; height: 32.13px; display: flex; align-items: center; justify-content: center; font-size: 13px; font-weight: bold; color: #000;">
|
||||
<!-- Quantity value will be populated here -->
|
||||
</div>
|
||||
|
||||
<!-- Row 4: Customer order -->
|
||||
<div style="position: absolute; top: 96.39px; left: 0; width: 90.96px; height: 32.13px; display: flex; align-items: center; padding-left: 5px; font-size: 10px; color: #000;">
|
||||
Customer order
|
||||
</div>
|
||||
<div id="client-order-info" style="position: absolute; top: 96.39px; left: 90.96px; width: 136.44px; height: 32.13px; display: flex; align-items: center; justify-content: center; font-size: 12px; font-weight: bold; color: #000;">
|
||||
<!-- Client order info will be populated here -->
|
||||
</div>
|
||||
|
||||
<!-- Row 5: Delivery date -->
|
||||
<div style="position: absolute; top: 128.52px; left: 0; width: 90.96px; height: 32.13px; display: flex; align-items: center; padding-left: 5px; font-size: 10px; color: #000;">
|
||||
Delivery date
|
||||
</div>
|
||||
<div id="delivery-date-value" style="position: absolute; top: 128.52px; left: 90.96px; width: 136.44px; height: 32.13px; display: flex; align-items: center; justify-content: center; font-size: 12px; font-weight: bold; color: #000;">
|
||||
<!-- Delivery date value will be populated here -->
|
||||
</div>
|
||||
|
||||
<!-- Row 6: Description (double height) -->
|
||||
<div style="position: absolute; top: 160.65px; left: 0; width: 90.96px; height: 64.26px; display: flex; align-items: center; padding-left: 5px; font-size: 10px; color: #000;">
|
||||
Product description
|
||||
</div>
|
||||
<div id="description-value" style="position: absolute; top: 160.65px; left: 90.96px; width: 136.44px; height: 64.26px; display: flex; align-items: center; justify-content: center; font-size: 8px; color: #000; text-align: center; padding: 2px; overflow: hidden;">
|
||||
<!-- Description will be populated here -->
|
||||
</div>
|
||||
|
||||
<!-- Row 7: Size -->
|
||||
<div style="position: absolute; top: 224.91px; left: 0; width: 90.96px; height: 32.13px; display: flex; align-items: center; padding-left: 5px; font-size: 10px; color: #000;">
|
||||
Size
|
||||
</div>
|
||||
<div id="size-value" style="position: absolute; top: 224.91px; left: 90.96px; width: 136.44px; height: 32.13px; display: flex; align-items: center; justify-content: center; font-size: 10px; font-weight: bold; color: #000;">
|
||||
<!-- Size value will be populated here -->
|
||||
</div>
|
||||
|
||||
<!-- Row 8: Article Code -->
|
||||
<div style="position: absolute; top: 257.04px; left: 0; width: 90.96px; height: 32.13px; display: flex; align-items: center; padding-left: 5px; font-size: 10px; color: #000;">
|
||||
Article code
|
||||
</div>
|
||||
<div id="article-code-value" style="position: absolute; top: 257.04px; left: 90.96px; width: 136.44px; height: 32.13px; display: flex; align-items: center; justify-content: center; font-size: 9px; font-weight: bold; color: #000;">
|
||||
<!-- Article code will be populated here -->
|
||||
</div>
|
||||
|
||||
<!-- Row 9: Production Order -->
|
||||
<div style="position: absolute; top: 289.17px; left: 0; width: 90.96px; height: 32.13px; display: flex; align-items: center; padding-left: 5px; font-size: 10px; color: #000;">
|
||||
Prod. order
|
||||
</div>
|
||||
<div id="prod-order-value" style="position: absolute; top: 289.17px; left: 90.96px; width: 136.44px; height: 32.13px; display: flex; align-items: center; justify-content: center; font-size: 10px; font-weight: bold; color: #000;">
|
||||
<!-- Production order will be populated here -->
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Bottom barcode section -->
|
||||
<div style="position: absolute; bottom: 28.35px; left: 11.34px; width: 227.4px; height: 28.35px; border: 2px solid #333; background: white; display: flex; align-items: center; justify-content: center;">
|
||||
<div id="barcode-text" style="font-family: 'Courier New', monospace; font-size: 12px; font-weight: bold; letter-spacing: 1px; color: #000;">
|
||||
<!-- Barcode text will be populated here -->
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Vertical barcode (right side) -->
|
||||
<div style="position: absolute; right: 11.34px; top: 65.7px; width: 28.35px; height: 321.3px; border: 2px solid #333; background: white; writing-mode: vertical-lr; text-orientation: sideways; display: flex; align-items: center; justify-content: center;">
|
||||
<div id="vertical-barcode-text" style="font-family: 'Courier New', monospace; font-size: 10px; font-weight: bold; letter-spacing: 1px; color: #000; transform: rotate(180deg);">
|
||||
<!-- Vertical barcode text will be populated here -->
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Print Options -->
|
||||
<div style="width: 100%; margin-top: 20px;">
|
||||
<!-- Print Method Selection -->
|
||||
<div style="margin-bottom: 15px;">
|
||||
<label style="font-size: 12px; font-weight: 600; color: #495057; margin-bottom: 8px; display: block;">
|
||||
📄 Print Method:
|
||||
</label>
|
||||
|
||||
<div class="form-check mb-2">
|
||||
<input class="form-check-input" type="radio" name="printMethod" id="pdfGenerate" value="pdf" checked>
|
||||
<label class="form-check-label" for="pdfGenerate" style="font-size: 11px; line-height: 1.3;">
|
||||
<strong>Generate PDF</strong><br>
|
||||
<span class="text-muted">Create PDF for manual printing (recommended)</span>
|
||||
</label>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Print Button -->
|
||||
<div style="width: 100%; text-align: center; margin-bottom: 15px;">
|
||||
<button id="print-label-btn" class="btn btn-success" style="font-size: 14px; padding: 10px 30px; border-radius: 6px; font-weight: 600;">
|
||||
📄 Generate PDF Labels
|
||||
</button>
|
||||
</div>
|
||||
|
||||
<!-- Print Information -->
|
||||
<div style="width: 100%; text-align: center; color: #6c757d; font-size: 11px; line-height: 1.4;">
|
||||
<div style="margin-bottom: 5px;">Creates sequential labels based on quantity</div>
|
||||
<small>(e.g., CP00000711-001 to CP00000711-063)</small>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Data Preview Card -->
|
||||
<div class="card scan-table-card" style="min-height: 700px; width: calc(100% - 350px); margin: 0;">
|
||||
<h3>Data Preview (Unprinted Orders)</h3>
|
||||
<button id="check-db-btn" class="btn btn-primary mb-3">Load Orders</button>
|
||||
<div class="report-table-container">
|
||||
<table class="scan-table print-module-table">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>ID</th>
|
||||
<th>Comanda Productie</th>
|
||||
<th>Cod Articol</th>
|
||||
<th>Descr. Com. Prod</th>
|
||||
<th>Cantitate</th>
|
||||
<th>Data Livrare</th>
|
||||
<th>Dimensiune</th>
|
||||
<th>Com. Achiz. Client</th>
|
||||
<th>Nr. Linie</th>
|
||||
<th>Customer Name</th>
|
||||
<th>Customer Art. Nr.</th>
|
||||
<th>Open Order</th>
|
||||
<th>Line</th>
|
||||
<th>Printed</th>
|
||||
<th>Created</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody id="unprinted-orders-table">
|
||||
<!-- Data will be dynamically loaded here -->
|
||||
</tbody>
|
||||
</table>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
// Simplified notification system
|
||||
function showNotification(message, type = 'info') {
|
||||
const existingNotifications = document.querySelectorAll('.notification');
|
||||
existingNotifications.forEach(n => n.remove());
|
||||
|
||||
const notification = document.createElement('div');
|
||||
notification.className = `notification alert alert-${type === 'error' ? 'danger' : type === 'success' ? 'success' : type === 'warning' ? 'warning' : 'info'}`;
|
||||
notification.style.cssText = `
|
||||
position: fixed;
|
||||
top: 20px;
|
||||
right: 20px;
|
||||
z-index: 9999;
|
||||
max-width: 350px;
|
||||
padding: 15px;
|
||||
border-radius: 5px;
|
||||
box-shadow: 0 4px 6px rgba(0,0,0,0.1);
|
||||
`;
|
||||
notification.innerHTML = `
|
||||
<div style="display: flex; align-items: center; justify-content: space-between;">
|
||||
<span style="flex: 1; padding-right: 10px;">${message}</span>
|
||||
<button type="button" onclick="this.parentElement.parentElement.remove()" style="background: none; border: none; font-size: 20px; cursor: pointer;">×</button>
|
||||
</div>
|
||||
`;
|
||||
|
||||
document.body.appendChild(notification);
|
||||
|
||||
setTimeout(() => {
|
||||
if (notification.parentElement) {
|
||||
notification.remove();
|
||||
}
|
||||
}, 5000);
|
||||
}
|
||||
|
||||
// Database loading functionality
|
||||
document.getElementById('check-db-btn').addEventListener('click', function() {
|
||||
const button = this;
|
||||
const originalText = button.textContent;
|
||||
button.textContent = 'Loading...';
|
||||
button.disabled = true;
|
||||
|
||||
fetch('/get_unprinted_orders')
|
||||
.then(response => {
|
||||
if (response.status === 403) {
|
||||
return response.json().then(errorData => {
|
||||
throw new Error(`Access Denied: ${errorData.error}`);
|
||||
});
|
||||
} else if (!response.ok) {
|
||||
return response.text().then(text => {
|
||||
throw new Error(`HTTP ${response.status}: ${text}`);
|
||||
});
|
||||
}
|
||||
return response.json();
|
||||
})
|
||||
.then(data => {
|
||||
console.log('Received data:', data);
|
||||
const tbody = document.getElementById('unprinted-orders-table');
|
||||
tbody.innerHTML = '';
|
||||
|
||||
if (data.length === 0) {
|
||||
tbody.innerHTML = '<tr><td colspan="15" style="text-align: center; padding: 20px; color: #28a745;"><strong>✅ All orders have been printed!</strong><br><small>No unprinted orders remaining.</small></td></tr>';
|
||||
clearLabelPreview();
|
||||
return;
|
||||
}
|
||||
|
||||
data.forEach((order, index) => {
|
||||
const tr = document.createElement('tr');
|
||||
tr.dataset.orderId = order.id;
|
||||
tr.dataset.orderIndex = index;
|
||||
tr.style.cursor = 'pointer';
|
||||
tr.innerHTML = `
|
||||
<td style="font-size: 9px;">${order.id}</td>
|
||||
<td style="font-size: 9px;"><strong>${order.comanda_productie}</strong></td>
|
||||
<td style="font-size: 9px;">${order.cod_articol || '-'}</td>
|
||||
<td style="font-size: 9px;">${order.descr_com_prod}</td>
|
||||
<td style="text-align: right; font-weight: 600; font-size: 9px;">${order.cantitate}</td>
|
||||
<td style="text-align: center; font-size: 9px;">
|
||||
${order.data_livrare ? new Date(order.data_livrare).toLocaleDateString() : '-'}
|
||||
</td>
|
||||
<td style="text-align: center; font-size: 9px;">${order.dimensiune || '-'}</td>
|
||||
<td style="font-size: 9px;">${order.com_achiz_client || '-'}</td>
|
||||
<td style="text-align: right; font-size: 9px;">${order.nr_linie_com_client || '-'}</td>
|
||||
<td style="font-size: 9px;">${order.customer_name || '-'}</td>
|
||||
<td style="font-size: 9px;">${order.customer_article_number || '-'}</td>
|
||||
<td style="font-size: 9px;">${order.open_for_order || '-'}</td>
|
||||
<td style="text-align: right; font-size: 9px;">${order.line_number || '-'}</td>
|
||||
<td style="text-align: center; font-size: 9px;">
|
||||
${order.printed_labels == 1 ?
|
||||
'<span style="color: #28a745; font-weight: bold;">✅ Yes</span>' :
|
||||
'<span style="color: #dc3545;">❌ No</span>'}
|
||||
</td>
|
||||
<td style="font-size: 9px; color: #6c757d;">
|
||||
${order.created_at ? new Date(order.created_at).toLocaleString() : '-'}
|
||||
</td>
|
||||
`;
|
||||
|
||||
tr.addEventListener('click', function() {
|
||||
console.log('Row clicked:', order.id);
|
||||
|
||||
// Remove selection from other rows
|
||||
document.querySelectorAll('.print-module-table tbody tr').forEach(row => {
|
||||
row.classList.remove('selected');
|
||||
const cells = row.querySelectorAll('td');
|
||||
cells.forEach(cell => {
|
||||
cell.style.backgroundColor = '';
|
||||
cell.style.color = '';
|
||||
});
|
||||
});
|
||||
|
||||
// Select this row
|
||||
this.classList.add('selected');
|
||||
const cells = this.querySelectorAll('td');
|
||||
cells.forEach(cell => {
|
||||
cell.style.backgroundColor = '#007bff';
|
||||
cell.style.color = 'white';
|
||||
});
|
||||
|
||||
// Update label preview with selected order data
|
||||
updateLabelPreview(order);
|
||||
});
|
||||
|
||||
tbody.appendChild(tr);
|
||||
});
|
||||
|
||||
// Auto-select first row
|
||||
setTimeout(() => {
|
||||
const firstRow = document.querySelector('.print-module-table tbody tr');
|
||||
if (firstRow && !firstRow.querySelector('td[colspan]')) {
|
||||
firstRow.click();
|
||||
}
|
||||
}, 100);
|
||||
|
||||
showNotification(`✅ Loaded ${data.length} unprinted orders`, 'success');
|
||||
})
|
||||
.catch(error => {
|
||||
console.error('Error loading orders:', error);
|
||||
const tbody = document.getElementById('unprinted-orders-table');
|
||||
tbody.innerHTML = '<tr><td colspan="15" style="text-align: center; padding: 20px; color: #dc3545;"><strong>❌ Failed to load data</strong><br><small>' + error.message + '</small></td></tr>';
|
||||
showNotification('❌ Failed to load orders: ' + error.message, 'error');
|
||||
})
|
||||
.finally(() => {
|
||||
button.textContent = originalText;
|
||||
button.disabled = false;
|
||||
});
|
||||
});
|
||||
|
||||
// Update label preview with order data
|
||||
function updateLabelPreview(order) {
|
||||
document.getElementById('customer-name-row').textContent = order.customer_name || 'N/A';
|
||||
document.getElementById('quantity-ordered-value').textContent = order.cantitate || '0';
|
||||
document.getElementById('client-order-info').textContent =
|
||||
`${order.com_achiz_client || 'N/A'}-${order.nr_linie_com_client || '00'}`;
|
||||
document.getElementById('delivery-date-value').textContent =
|
||||
order.data_livrare ? new Date(order.data_livrare).toLocaleDateString() : 'N/A';
|
||||
document.getElementById('description-value').textContent = order.descr_com_prod || 'N/A';
|
||||
document.getElementById('size-value').textContent = order.dimensiune || 'N/A';
|
||||
document.getElementById('article-code-value').textContent = order.cod_articol || 'N/A';
|
||||
document.getElementById('prod-order-value').textContent = order.comanda_productie || 'N/A';
|
||||
document.getElementById('barcode-text').textContent = order.comanda_productie || 'N/A';
|
||||
document.getElementById('vertical-barcode-text').textContent =
|
||||
`${order.comanda_productie || '000000'}-${order.nr_linie_com_client ? String(order.nr_linie_com_client).padStart(2, '0') : '00'}`;
|
||||
}
|
||||
|
||||
// Clear label preview when no orders are available
|
||||
function clearLabelPreview() {
|
||||
document.getElementById('customer-name-row').textContent = 'No orders available';
|
||||
document.getElementById('quantity-ordered-value').textContent = '0';
|
||||
document.getElementById('client-order-info').textContent = 'N/A';
|
||||
document.getElementById('delivery-date-value').textContent = 'N/A';
|
||||
document.getElementById('size-value').textContent = 'N/A';
|
||||
document.getElementById('description-value').textContent = 'N/A';
|
||||
document.getElementById('article-code-value').textContent = 'N/A';
|
||||
document.getElementById('prod-order-value').textContent = 'N/A';
|
||||
document.getElementById('barcode-text').textContent = 'N/A';
|
||||
document.getElementById('vertical-barcode-text').textContent = '000000-00';
|
||||
}
|
||||
|
||||
// PDF Generation Handler
|
||||
document.getElementById('print-label-btn').addEventListener('click', function(e) {
|
||||
e.preventDefault();
|
||||
|
||||
// Get selected order
|
||||
const selectedRow = document.querySelector('.print-module-table tbody tr.selected');
|
||||
if (!selectedRow) {
|
||||
showNotification('⚠️ Please select an order first from the table below.', 'warning');
|
||||
return;
|
||||
}
|
||||
|
||||
handlePDFGeneration(selectedRow);
|
||||
});
|
||||
|
||||
// Handle PDF generation
|
||||
function handlePDFGeneration(selectedRow) {
|
||||
const orderId = selectedRow.dataset.orderId;
|
||||
const quantityCell = selectedRow.querySelector('td:nth-child(5)');
|
||||
const quantity = quantityCell ? parseInt(quantityCell.textContent) : 1;
|
||||
const prodOrderCell = selectedRow.querySelector('td:nth-child(2)');
|
||||
const prodOrder = prodOrderCell ? prodOrderCell.textContent.trim() : 'N/A';
|
||||
|
||||
const button = document.getElementById('print-label-btn');
|
||||
const originalText = button.textContent;
|
||||
button.textContent = 'Generating PDF...';
|
||||
button.disabled = true;
|
||||
|
||||
console.log(`Generating PDF for order ${orderId} with ${quantity} labels`);
|
||||
|
||||
// Generate PDF with paper-saving mode enabled (optimized for thermal printers)
|
||||
fetch(`/generate_labels_pdf/${orderId}/true`, {
|
||||
method: 'POST',
|
||||
headers: {
|
||||
'Content-Type': 'application/json'
|
||||
}
|
||||
})
|
||||
.then(response => {
|
||||
if (!response.ok) {
|
||||
throw new Error(`HTTP error! status: ${response.status}`);
|
||||
}
|
||||
return response.blob();
|
||||
})
|
||||
.then(blob => {
|
||||
// Create blob URL for PDF
|
||||
const url = window.URL.createObjectURL(blob);
|
||||
|
||||
// Create download link for PDF
|
||||
const a = document.createElement('a');
|
||||
a.href = url;
|
||||
a.download = `labels_${prodOrder}_${quantity}pcs.pdf`;
|
||||
document.body.appendChild(a);
|
||||
a.click();
|
||||
document.body.removeChild(a);
|
||||
|
||||
// Also open PDF in new tab for printing
|
||||
const printWindow = window.open(url, '_blank');
|
||||
if (printWindow) {
|
||||
printWindow.focus();
|
||||
|
||||
// Wait for PDF to load, then show print dialog
|
||||
setTimeout(() => {
|
||||
printWindow.print();
|
||||
|
||||
// Clean up blob URL after print dialog is shown
|
||||
setTimeout(() => {
|
||||
window.URL.revokeObjectURL(url);
|
||||
}, 2000);
|
||||
}, 1500);
|
||||
} else {
|
||||
// If popup was blocked, clean up immediately
|
||||
setTimeout(() => {
|
||||
window.URL.revokeObjectURL(url);
|
||||
}, 1000);
|
||||
}
|
||||
|
||||
// Show success message
|
||||
showNotification(`✅ PDF generated successfully!\n📊 Order: ${prodOrder}\n📦 Labels: ${quantity} pieces`, 'success');
|
||||
|
||||
// Refresh the orders table to reflect printed status
|
||||
setTimeout(() => {
|
||||
document.getElementById('check-db-btn').click();
|
||||
}, 1000);
|
||||
})
|
||||
.catch(error => {
|
||||
console.error('Error generating PDF:', error);
|
||||
showNotification('❌ Failed to generate PDF labels. Error: ' + error.message, 'error');
|
||||
})
|
||||
.finally(() => {
|
||||
// Reset button state
|
||||
button.textContent = originalText;
|
||||
button.disabled = false;
|
||||
});
|
||||
}
|
||||
|
||||
// Load orders on page load
|
||||
document.addEventListener('DOMContentLoaded', function() {
|
||||
setTimeout(() => {
|
||||
document.getElementById('check-db-btn').click();
|
||||
}, 500);
|
||||
});
|
||||
</script>
|
||||
{% endblock %}
|
||||
File diff suppressed because it is too large
@@ -1,110 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
|
||||
import mariadb
|
||||
import os
|
||||
import sys
|
||||
|
||||
def get_external_db_connection():
|
||||
"""Reads the external_server.conf file and returns a MariaDB database connection."""
|
||||
# Get the instance folder path
|
||||
current_dir = os.path.dirname(os.path.abspath(__file__))
|
||||
instance_folder = os.path.join(current_dir, '../../instance')
|
||||
settings_file = os.path.join(instance_folder, 'external_server.conf')
|
||||
|
||||
if not os.path.exists(settings_file):
|
||||
raise FileNotFoundError(f"The external_server.conf file is missing: {settings_file}")
|
||||
|
||||
# Read settings from the configuration file
|
||||
settings = {}
|
||||
with open(settings_file, 'r') as f:
|
||||
for line in f:
|
||||
line = line.strip()
|
||||
if line and '=' in line:
|
||||
key, value = line.split('=', 1)
|
||||
settings[key] = value
|
||||
|
||||
print(f"Connecting to MariaDB:")
|
||||
print(f" Host: {settings.get('server_domain', 'N/A')}")
|
||||
print(f" Port: {settings.get('port', 'N/A')}")
|
||||
print(f" Database: {settings.get('database_name', 'N/A')}")
|
||||
|
||||
return mariadb.connect(
|
||||
user=settings['username'],
|
||||
password=settings['password'],
|
||||
host=settings['server_domain'],
|
||||
port=int(settings['port']),
|
||||
database=settings['database_name']
|
||||
)
|
||||
|
||||
def main():
|
||||
try:
|
||||
print("=== Adding Email Column to Users Table ===")
|
||||
conn = get_external_db_connection()
|
||||
cursor = conn.cursor()
|
||||
|
||||
# First, check the current table structure
|
||||
print("\n1. Checking current table structure...")
|
||||
cursor.execute("DESCRIBE users")
|
||||
columns = cursor.fetchall()
|
||||
|
||||
has_email = False
|
||||
for column in columns:
|
||||
print(f" Column: {column[0]} ({column[1]})")
|
||||
if column[0] == 'email':
|
||||
has_email = True
|
||||
|
||||
if not has_email:
|
||||
print("\n2. Adding email column...")
|
||||
cursor.execute("ALTER TABLE users ADD COLUMN email VARCHAR(255)")
|
||||
conn.commit()
|
||||
print(" ✓ Email column added successfully")
|
||||
else:
|
||||
print("\n2. Email column already exists")
|
||||
|
||||
# Now check and display all users
|
||||
print("\n3. Current users in database:")
|
||||
cursor.execute("SELECT id, username, role, email FROM users")
|
||||
users = cursor.fetchall()
|
||||
|
||||
if users:
|
||||
print(f" Found {len(users)} users:")
|
||||
for user in users:
|
||||
email = user[3] if user[3] else "No email"
|
||||
print(f" - ID: {user[0]}, Username: {user[1]}, Role: {user[2]}, Email: {email}")
|
||||
else:
|
||||
print(" No users found - creating test users...")
|
||||
|
||||
# Create some test users
|
||||
test_users = [
|
||||
('admin_user', 'admin123', 'admin', 'admin@company.com'),
|
||||
('manager_user', 'manager123', 'manager', 'manager@company.com'),
|
||||
('warehouse_user', 'warehouse123', 'warehouse_manager', 'warehouse@company.com'),
|
||||
('quality_user', 'quality123', 'quality_manager', 'quality@company.com')
|
||||
]
|
||||
|
||||
for username, password, role, email in test_users:
|
||||
try:
|
||||
cursor.execute("""
|
||||
INSERT INTO users (username, password, role, email)
|
||||
VALUES (%s, %s, %s, %s)
|
||||
""", (username, password, role, email))
|
||||
print(f" ✓ Created user: {username} ({role})")
|
||||
except mariadb.IntegrityError as e:
|
||||
print(f" ⚠ User {username} already exists: {e}")
|
||||
|
||||
conn.commit()
|
||||
print(" ✓ Test users created successfully")
|
||||
|
||||
conn.close()
|
||||
print("\n=== Database Update Complete ===")
|
||||
|
||||
except Exception as e:
|
||||
print(f"❌ Error: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
return 1
|
||||
|
||||
return 0
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
|
||||
@@ -1,105 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
|
||||
import mariadb
|
||||
import os
|
||||
import sys
|
||||
|
||||
def get_external_db_connection():
|
||||
"""Reads the external_server.conf file and returns a MariaDB database connection."""
|
||||
# Get the instance folder path
|
||||
current_dir = os.path.dirname(os.path.abspath(__file__))
|
||||
instance_folder = os.path.join(current_dir, '../../instance')
|
||||
settings_file = os.path.join(instance_folder, 'external_server.conf')
|
||||
|
||||
if not os.path.exists(settings_file):
|
||||
raise FileNotFoundError(f"The external_server.conf file is missing: {settings_file}")
|
||||
|
||||
# Read settings from the configuration file
|
||||
settings = {}
|
||||
with open(settings_file, 'r') as f:
|
||||
for line in f:
|
||||
line = line.strip()
|
||||
if line and '=' in line:
|
||||
key, value = line.split('=', 1)
|
||||
settings[key] = value
|
||||
|
||||
print(f"Connecting to MariaDB with settings:")
|
||||
print(f" Host: {settings.get('server_domain', 'N/A')}")
|
||||
print(f" Port: {settings.get('port', 'N/A')}")
|
||||
print(f" Database: {settings.get('database_name', 'N/A')}")
|
||||
print(f" Username: {settings.get('username', 'N/A')}")
|
||||
|
||||
# Create a database connection
|
||||
return mariadb.connect(
|
||||
user=settings['username'],
|
||||
password=settings['password'],
|
||||
host=settings['server_domain'],
|
||||
port=int(settings['port']),
|
||||
database=settings['database_name']
|
||||
)
|
||||
|
||||
def main():
|
||||
try:
|
||||
print("=== Checking External MariaDB Database ===")
|
||||
conn = get_external_db_connection()
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Create users table if it doesn't exist
|
||||
print("\n1. Creating/verifying users table...")
|
||||
cursor.execute('''
|
||||
CREATE TABLE IF NOT EXISTS users (
|
||||
id INT AUTO_INCREMENT PRIMARY KEY,
|
||||
username VARCHAR(50) UNIQUE NOT NULL,
|
||||
password VARCHAR(255) NOT NULL,
|
||||
role VARCHAR(50) NOT NULL,
|
||||
email VARCHAR(255)
|
||||
)
|
||||
''')
|
||||
print(" ✓ Users table created/verified")
|
||||
|
||||
# Check existing users
|
||||
print("\n2. Checking existing users...")
|
||||
cursor.execute("SELECT id, username, role, email FROM users")
|
||||
users = cursor.fetchall()
|
||||
|
||||
if users:
|
||||
print(f" Found {len(users)} existing users:")
|
||||
for user in users:
|
||||
email = user[3] if user[3] else "No email"
|
||||
print(f" - ID: {user[0]}, Username: {user[1]}, Role: {user[2]}, Email: {email}")
|
||||
else:
|
||||
print(" No users found in external database")
|
||||
|
||||
# Create some test users
|
||||
print("\n3. Creating test users...")
|
||||
test_users = [
|
||||
('admin_user', 'admin123', 'admin', 'admin@company.com'),
|
||||
('manager_user', 'manager123', 'manager', 'manager@company.com'),
|
||||
('warehouse_user', 'warehouse123', 'warehouse_manager', 'warehouse@company.com'),
|
||||
('quality_user', 'quality123', 'quality_manager', 'quality@company.com')
|
||||
]
|
||||
|
||||
for username, password, role, email in test_users:
|
||||
try:
|
||||
cursor.execute("""
|
||||
INSERT INTO users (username, password, role, email)
|
||||
VALUES (%s, %s, %s, %s)
|
||||
""", (username, password, role, email))
|
||||
print(f" ✓ Created user: {username} ({role})")
|
||||
except mariadb.IntegrityError as e:
|
||||
print(f" ⚠ User {username} already exists: {e}")
|
||||
|
||||
conn.commit()
|
||||
print(" ✓ Test users created successfully")
|
||||
|
||||
conn.close()
|
||||
print("\n=== Database Check Complete ===")
|
||||
|
||||
except Exception as e:
|
||||
print(f"❌ Error: {e}")
|
||||
return 1
|
||||
|
||||
return 0
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
|
||||
@@ -1,60 +0,0 @@
import mariadb
import os

def get_external_db_connection():
    """Get MariaDB connection using external_server.conf"""
    settings_file = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../instance/external_server.conf'))
    settings = {}
    with open(settings_file, 'r') as f:
        for line in f:
            line = line.strip()
            # Only parse key=value entries; skip blank or malformed lines
            if line and '=' in line:
                key, value = line.split('=', 1)
                settings[key] = value
    return mariadb.connect(
        user=settings['username'],
        password=settings['password'],
        host=settings['server_domain'],
        port=int(settings['port']),
        database=settings['database_name']
    )

def create_external_users_table():
    """Create users table and superadmin user in external MariaDB database"""
    try:
        conn = get_external_db_connection()
        cursor = conn.cursor()

        # Create users table if not exists (MariaDB syntax)
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS users (
                id INT AUTO_INCREMENT PRIMARY KEY,
                username VARCHAR(50) UNIQUE NOT NULL,
                password VARCHAR(255) NOT NULL,
                role VARCHAR(50) NOT NULL
            )
        ''')

        # Insert superadmin user if not exists
        cursor.execute('''
            INSERT IGNORE INTO users (username, password, role)
            VALUES (%s, %s, %s)
        ''', ('superadmin', 'superadmin123', 'superadmin'))

        # Check if user was created/exists
        cursor.execute("SELECT username, password, role FROM users WHERE username = %s", ('superadmin',))
        result = cursor.fetchone()

        if result:
            print("SUCCESS: Superadmin user exists in external database")
            print(f"Username: {result[0]}, Password: {result[1]}, Role: {result[2]}")
        else:
            print("ERROR: Failed to create/find superadmin user")

        conn.commit()
        conn.close()
        print("External MariaDB users table setup completed.")

    except Exception as e:
        print(f"ERROR: {e}")

if __name__ == "__main__":
    create_external_users_table()

@@ -1,110 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Database script to create the order_for_labels table
|
||||
This table will store order information for label generation
|
||||
"""
|
||||
|
||||
import sys
|
||||
import os
|
||||
import mariadb
|
||||
from flask import Flask
|
||||
|
||||
# Add the app directory to the path
|
||||
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
|
||||
|
||||
def get_db_connection():
|
||||
"""Get database connection using settings from external_server.conf"""
|
||||
# Go up two levels from this script to reach py_app directory, then to instance
|
||||
app_root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
|
||||
settings_file = os.path.join(app_root, 'instance', 'external_server.conf')
|
||||
|
||||
settings = {}
|
||||
with open(settings_file, 'r') as f:
|
||||
for line in f:
|
||||
key, value = line.strip().split('=', 1)
|
||||
settings[key] = value
|
||||
|
||||
return mariadb.connect(
|
||||
user=settings['username'],
|
||||
password=settings['password'],
|
||||
host=settings['server_domain'],
|
||||
port=int(settings['port']),
|
||||
database=settings['database_name']
|
||||
)
|
||||
|
||||
def create_order_for_labels_table():
|
||||
"""
|
||||
Creates the order_for_labels table with the specified structure
|
||||
"""
|
||||
try:
|
||||
conn = get_db_connection()
|
||||
cursor = conn.cursor()
|
||||
|
||||
# First check if table already exists
|
||||
cursor.execute("SHOW TABLES LIKE 'order_for_labels'")
|
||||
result = cursor.fetchone()
|
||||
|
||||
if result:
|
||||
print("Table 'order_for_labels' already exists.")
|
||||
# Show current structure
|
||||
cursor.execute("DESCRIBE order_for_labels")
|
||||
columns = cursor.fetchall()
|
||||
print("\nCurrent table structure:")
|
||||
for col in columns:
|
||||
print(f" {col[0]} - {col[1]} {'NULL' if col[2] == 'YES' else 'NOT NULL'}")
|
||||
else:
|
||||
# Create the table
|
||||
create_table_sql = """
|
||||
CREATE TABLE order_for_labels (
|
||||
id BIGINT AUTO_INCREMENT PRIMARY KEY COMMENT 'Unique identifier',
|
||||
comanda_productie VARCHAR(15) NOT NULL COMMENT 'Production Order',
|
||||
cod_articol VARCHAR(15) COMMENT 'Article Code',
|
||||
descr_com_prod VARCHAR(50) NOT NULL COMMENT 'Production Order Description',
|
||||
cantitate INT(3) NOT NULL COMMENT 'Quantity',
|
||||
com_achiz_client VARCHAR(25) COMMENT 'Client Purchase Order',
|
||||
nr_linie_com_client INT(3) COMMENT 'Client Order Line Number',
|
||||
customer_name VARCHAR(50) COMMENT 'Customer Name',
|
||||
customer_article_number VARCHAR(25) COMMENT 'Customer Article Number',
|
||||
open_for_order VARCHAR(25) COMMENT 'Open for Order Status',
|
||||
line_number INT(3) COMMENT 'Line Number',
|
||||
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP COMMENT 'Record creation timestamp',
|
||||
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'Record update timestamp'
|
||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='Table for storing order information for label generation'
|
||||
"""
|
||||
|
||||
cursor.execute(create_table_sql)
|
||||
conn.commit()
|
||||
print("✅ Table 'order_for_labels' created successfully!")
|
||||
|
||||
# Show the created structure
|
||||
cursor.execute("DESCRIBE order_for_labels")
|
||||
columns = cursor.fetchall()
|
||||
print("\n📋 Table structure:")
|
||||
for col in columns:
|
||||
null_info = 'NULL' if col[2] == 'YES' else 'NOT NULL'
|
||||
default_info = f" DEFAULT {col[4]}" if col[4] else ""
|
||||
print(f" 📌 {col[0]:<25} {col[1]:<20} {null_info}{default_info}")
|
||||
|
||||
conn.close()
|
||||
|
||||
except mariadb.Error as e:
|
||||
print(f"❌ Database error: {e}")
|
||||
return False
|
||||
except Exception as e:
|
||||
print(f"❌ Error: {e}")
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
if __name__ == "__main__":
|
||||
print("🏗️ Creating order_for_labels table...")
|
||||
print("="*50)
|
||||
|
||||
success = create_order_for_labels_table()
|
||||
|
||||
if success:
|
||||
print("\n✅ Database setup completed successfully!")
|
||||
else:
|
||||
print("\n❌ Database setup failed!")
|
||||
|
||||
print("="*50)
|
||||
@@ -1,141 +0,0 @@
#!/usr/bin/env python3

import mariadb
import os
import sys


def get_external_db_connection():
    """Reads the external_server.conf file and returns a MariaDB database connection."""
    # Get the instance folder path
    current_dir = os.path.dirname(os.path.abspath(__file__))
    instance_folder = os.path.join(current_dir, '../../instance')
    settings_file = os.path.join(instance_folder, 'external_server.conf')

    if not os.path.exists(settings_file):
        raise FileNotFoundError(f"The external_server.conf file is missing: {settings_file}")

    # Read settings from the configuration file
    settings = {}
    with open(settings_file, 'r') as f:
        for line in f:
            line = line.strip()
            if line and '=' in line:
                key, value = line.split('=', 1)
                settings[key] = value

    return mariadb.connect(
        user=settings['username'],
        password=settings['password'],
        host=settings['server_domain'],
        port=int(settings['port']),
        database=settings['database_name']
    )


def main():
    try:
        print("=== Creating Permission Management Tables ===")
        conn = get_external_db_connection()
        cursor = conn.cursor()

        # 1. Create permissions table
        print("\n1. Creating permissions table...")
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS permissions (
                id INT AUTO_INCREMENT PRIMARY KEY,
                permission_key VARCHAR(255) UNIQUE NOT NULL,
                page VARCHAR(100) NOT NULL,
                page_name VARCHAR(255) NOT NULL,
                section VARCHAR(100) NOT NULL,
                section_name VARCHAR(255) NOT NULL,
                action VARCHAR(50) NOT NULL,
                action_name VARCHAR(255) NOT NULL,
                description TEXT,
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
            )
        ''')
        print(" ✓ Permissions table created/verified")

        # 2. Create role_permissions table
        print("\n2. Creating role_permissions table...")
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS role_permissions (
                id INT AUTO_INCREMENT PRIMARY KEY,
                role VARCHAR(50) NOT NULL,
                permission_key VARCHAR(255) NOT NULL,
                granted BOOLEAN DEFAULT TRUE,
                granted_by VARCHAR(50),
                granted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
                UNIQUE KEY unique_role_permission (role, permission_key),
                FOREIGN KEY (permission_key) REFERENCES permissions(permission_key) ON DELETE CASCADE
            )
        ''')
        print(" ✓ Role permissions table created/verified")

        # 3. Create role_hierarchy table for role management
        print("\n3. Creating role_hierarchy table...")
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS role_hierarchy (
                id INT AUTO_INCREMENT PRIMARY KEY,
                role_name VARCHAR(50) UNIQUE NOT NULL,
                display_name VARCHAR(255) NOT NULL,
                description TEXT,
                level INT DEFAULT 0,
                is_active BOOLEAN DEFAULT TRUE,
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
            )
        ''')
        print(" ✓ Role hierarchy table created/verified")

        # 4. Create permission_audit_log table for tracking changes
        print("\n4. Creating permission_audit_log table...")
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS permission_audit_log (
                id INT AUTO_INCREMENT PRIMARY KEY,
                role VARCHAR(50) NOT NULL,
                permission_key VARCHAR(255) NOT NULL,
                action ENUM('granted', 'revoked') NOT NULL,
                changed_by VARCHAR(50) NOT NULL,
                changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                reason TEXT,
                ip_address VARCHAR(45)
            )
        ''')
        print(" ✓ Permission audit log table created/verified")

        conn.commit()

        # 5. Check if we need to populate initial data
        print("\n5. Checking for existing data...")
        cursor.execute("SELECT COUNT(*) FROM permissions")
        permission_count = cursor.fetchone()[0]

        if permission_count == 0:
            print(" No permissions found - will need to populate with default data")
            print(" Run 'populate_permissions.py' to initialize the permission system")
        else:
            print(f" Found {permission_count} existing permissions")

        cursor.execute("SELECT COUNT(*) FROM role_hierarchy")
        role_count = cursor.fetchone()[0]

        if role_count == 0:
            print(" No roles found - will need to populate with default roles")
        else:
            print(f" Found {role_count} existing roles")

        conn.close()
        print("\n=== Permission Database Schema Created Successfully ===")

    except Exception as e:
        print(f"❌ Error: {e}")
        import traceback
        traceback.print_exc()
        return 1

    return 0


if __name__ == "__main__":
    sys.exit(main())
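# --- Illustrative usage (not part of the original script) ---
# A minimal sketch of how the schema above can be queried to check whether a
# role has been granted a permission. The helper name role_has_permission and
# the example permission key are hypothetical; it reuses the
# get_external_db_connection() helper defined above.
def role_has_permission(role, permission_key):
    conn = get_external_db_connection()
    cursor = conn.cursor()
    cursor.execute(
        "SELECT rp.granted FROM role_permissions rp "
        "JOIN permissions p ON p.permission_key = rp.permission_key "
        "WHERE rp.role = %s AND rp.permission_key = %s",
        (role, permission_key),
    )
    row = cursor.fetchone()
    conn.close()
    return bool(row and row[0])

# Hypothetical key format; real keys are defined by populate_permissions.py
# print(role_has_permission('manager', 'quality.reports.view'))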
@@ -1,45 +0,0 @@
import sqlite3
import os


def create_roles_and_users_tables(db_path):
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    # Create users table if not exists
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS users (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            username TEXT UNIQUE NOT NULL,
            password TEXT NOT NULL,
            role TEXT NOT NULL
        )
    ''')
    # Insert superadmin user if not exists (default password: 'superadmin123', change after first login)
    cursor.execute('''
        INSERT OR IGNORE INTO users (username, password, role)
        VALUES (?, ?, ?)
    ''', ('superadmin', 'superadmin123', 'superadmin'))
    # Create roles table if not exists
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS roles (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            name TEXT UNIQUE NOT NULL,
            access_level TEXT NOT NULL,
            description TEXT
        )
    ''')
    # Insert superadmin role if not exists
    cursor.execute('''
        INSERT OR IGNORE INTO roles (name, access_level, description)
        VALUES (?, ?, ?)
    ''', ('superadmin', 'full', 'Full access to all app areas and functions'))
    conn.commit()
    conn.close()


if __name__ == "__main__":
    # Default path to users.db in instance folder
    instance_folder = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../instance'))
    if not os.path.exists(instance_folder):
        os.makedirs(instance_folder)
    db_path = os.path.join(instance_folder, 'users.db')
    create_roles_and_users_tables(db_path)
    print("Roles and users tables created. Superadmin user and role initialized.")
@@ -1,42 +0,0 @@
import mariadb

# Database connection credentials
db_config = {
    "user": "trasabilitate",
    "password": "Initial01!",
    "host": "localhost",
    "database": "trasabilitate"
}

# Connect to the database
try:
    conn = mariadb.connect(**db_config)
    cursor = conn.cursor()
    print("Connected to the database successfully!")

    # Create the scan1_orders table
    create_table_query = """
    CREATE TABLE IF NOT EXISTS scan1_orders (
        Id INT AUTO_INCREMENT PRIMARY KEY,                                       -- Auto-incremented ID with 6 digits
        operator_code VARCHAR(4) NOT NULL,                                       -- Operator code with 4 characters
        CP_full_code VARCHAR(15) NOT NULL UNIQUE,                                -- Full CP code with up to 15 characters
        OC1_code VARCHAR(4) NOT NULL,                                            -- OC1 code with 4 characters
        OC2_code VARCHAR(4) NOT NULL,                                            -- OC2 code with 4 characters
        CP_base_code VARCHAR(10) GENERATED ALWAYS AS (LEFT(CP_full_code, 10)) STORED, -- Auto-generated base code (first 10 characters of CP_full_code)
        quality_code INT(3) NOT NULL,                                            -- Quality code with 3 digits
        date DATE NOT NULL,                                                      -- Date in format dd-mm-yyyy
        time TIME NOT NULL,                                                      -- Time in format hh:mm:ss
        approved_quantity INT DEFAULT 0,                                         -- Auto-incremented quantity for quality_code = 000
        rejected_quantity INT DEFAULT 0                                          -- Auto-incremented quantity for quality_code != 000
    );
    """
    cursor.execute(create_table_query)
    print("Table 'scan1_orders' created successfully!")

    # Commit changes and close the connection
    conn.commit()
    cursor.close()
    conn.close()

except mariadb.Error as e:
    print(f"Error connecting to the database: {e}")
@@ -1,41 +0,0 @@
import mariadb

# Database connection credentials
# (reuse from create_scan_1db.py or update as needed)
db_config = {
    "user": "trasabilitate",
    "password": "Initial01!",
    "host": "localhost",
    "database": "trasabilitate"
}

try:
    conn = mariadb.connect(**db_config)
    cursor = conn.cursor()
    print("Connected to the database successfully!")

    # Create the scanfg_orders table (same structure as scan1_orders)
    create_table_query = """
    CREATE TABLE IF NOT EXISTS scanfg_orders (
        Id INT AUTO_INCREMENT PRIMARY KEY,
        operator_code VARCHAR(4) NOT NULL,
        CP_full_code VARCHAR(15) NOT NULL UNIQUE,
        OC1_code VARCHAR(4) NOT NULL,
        OC2_code VARCHAR(4) NOT NULL,
        CP_base_code VARCHAR(10) GENERATED ALWAYS AS (LEFT(CP_full_code, 10)) STORED,
        quality_code INT(3) NOT NULL,
        date DATE NOT NULL,
        time TIME NOT NULL,
        approved_quantity INT DEFAULT 0,
        rejected_quantity INT DEFAULT 0
    );
    """
    cursor.execute(create_table_query)
    print("Table 'scanfg_orders' created successfully!")

    conn.commit()
    cursor.close()
    conn.close()

except mariadb.Error as e:
    print(f"Error connecting to the database: {e}")
@@ -1,70 +0,0 @@
import mariadb

# Database connection credentials
db_config = {
    "user": "trasabilitate",
    "password": "Initial01!",
    "host": "localhost",
    "database": "trasabilitate"
}

# Connect to the database
try:
    conn = mariadb.connect(**db_config)
    cursor = conn.cursor()
    print("Connected to the database successfully!")

    # Delete old triggers if they exist
    try:
        cursor.execute("DROP TRIGGER IF EXISTS increment_approved_quantity;")
        print("Old trigger 'increment_approved_quantity' deleted successfully.")
    except mariadb.Error as e:
        print(f"Error deleting old trigger 'increment_approved_quantity': {e}")

    try:
        cursor.execute("DROP TRIGGER IF EXISTS increment_rejected_quantity;")
        print("Old trigger 'increment_rejected_quantity' deleted successfully.")
    except mariadb.Error as e:
        print(f"Error deleting old trigger 'increment_rejected_quantity': {e}")

    # Create corrected trigger for approved_quantity
    create_approved_trigger = """
    CREATE TRIGGER increment_approved_quantity
    BEFORE INSERT ON scan1_orders
    FOR EACH ROW
    BEGIN
        IF NEW.quality_code = 000 THEN
            SET NEW.approved_quantity = (
                SELECT COUNT(*)
                FROM scan1_orders
                WHERE CP_base_code = NEW.CP_base_code AND quality_code = 000
            ) + 1;
            SET NEW.rejected_quantity = (
                SELECT COUNT(*)
                FROM scan1_orders
                WHERE CP_base_code = NEW.CP_base_code AND quality_code != 000
            );
        ELSE
            SET NEW.approved_quantity = (
                SELECT COUNT(*)
                FROM scan1_orders
                WHERE CP_base_code = NEW.CP_base_code AND quality_code = 000
            );
            SET NEW.rejected_quantity = (
                SELECT COUNT(*)
                FROM scan1_orders
                WHERE CP_base_code = NEW.CP_base_code AND quality_code != 000
            ) + 1;
        END IF;
    END;
    """
    cursor.execute(create_approved_trigger)
    print("Trigger 'increment_approved_quantity' created successfully!")

    # Commit changes and close the connection
    conn.commit()
    cursor.close()
    conn.close()

except mariadb.Error as e:
    print(f"Error connecting to the database or creating triggers: {e}")
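# --- Illustrative usage (not part of the original script) ---
# A minimal sketch of what the BEFORE INSERT trigger above produces, assuming
# the scan1_orders table and the db_config credentials from these scripts; the
# CP codes below are made-up sample values.
demo_conn = mariadb.connect(**db_config)
demo_cursor = demo_conn.cursor()
rows = [
    ('OP01', 'CP00000001-0001', 'OC11', 'OC22', 0, '2025-04-22', '08:00:00'),
    ('OP01', 'CP00000001-0002', 'OC11', 'OC22', 0, '2025-04-22', '08:01:00'),
    ('OP01', 'CP00000001-0003', 'OC11', 'OC22', 101, '2025-04-22', '08:02:00'),
]
demo_cursor.executemany(
    "INSERT INTO scan1_orders (operator_code, CP_full_code, OC1_code, OC2_code, quality_code, date, time) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)", rows)
demo_conn.commit()

# The trigger fills approved_quantity / rejected_quantity per inserted row, so
# the third row should read approved_quantity=2, rejected_quantity=1.
demo_cursor.execute(
    "SELECT CP_full_code, approved_quantity, rejected_quantity FROM scan1_orders "
    "WHERE CP_base_code = 'CP00000001' ORDER BY Id")
for row in demo_cursor.fetchall():
    print(row)
demo_conn.close()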
@@ -1,73 +0,0 @@
import mariadb

# Database connection credentials
db_config = {
    "user": "trasabilitate",
    "password": "Initial01!",
    "host": "localhost",
    "database": "trasabilitate"
}

# Connect to the database
try:
    conn = mariadb.connect(**db_config)
    cursor = conn.cursor()
    print("Connected to the database successfully!")

    # Delete old triggers if they exist
    try:
        cursor.execute("DROP TRIGGER IF EXISTS increment_approved_quantity_fg;")
        print("Old trigger 'increment_approved_quantity_fg' deleted successfully.")
    except mariadb.Error as e:
        print(f"Error deleting old trigger 'increment_approved_quantity_fg': {e}")

    try:
        cursor.execute("DROP TRIGGER IF EXISTS increment_rejected_quantity_fg;")
        print("Old trigger 'increment_rejected_quantity_fg' deleted successfully.")
    except mariadb.Error as e:
        print(f"Error deleting old trigger 'increment_rejected_quantity_fg': {e}")

    # Create corrected trigger for approved_quantity in scanfg_orders
    create_approved_trigger_fg = """
    CREATE TRIGGER increment_approved_quantity_fg
    BEFORE INSERT ON scanfg_orders
    FOR EACH ROW
    BEGIN
        IF NEW.quality_code = 000 THEN
            SET NEW.approved_quantity = (
                SELECT COUNT(*)
                FROM scanfg_orders
                WHERE CP_base_code = NEW.CP_base_code AND quality_code = 000
            ) + 1;
            SET NEW.rejected_quantity = (
                SELECT COUNT(*)
                FROM scanfg_orders
                WHERE CP_base_code = NEW.CP_base_code AND quality_code != 000
            );
        ELSE
            SET NEW.approved_quantity = (
                SELECT COUNT(*)
                FROM scanfg_orders
                WHERE CP_base_code = NEW.CP_base_code AND quality_code = 000
            );
            SET NEW.rejected_quantity = (
                SELECT COUNT(*)
                FROM scanfg_orders
                WHERE CP_base_code = NEW.CP_base_code AND quality_code != 000
            ) + 1;
        END IF;
    END;
    """
    cursor.execute(create_approved_trigger_fg)
    print("Trigger 'increment_approved_quantity_fg' created successfully for scanfg_orders table!")

    # Commit changes and close the connection
    conn.commit()
    cursor.close()
    conn.close()

    print("\n✅ All triggers for scanfg_orders table created successfully!")
    print("The approved_quantity and rejected_quantity will now be calculated automatically.")

except mariadb.Error as e:
    print(f"Error connecting to the database or creating triggers: {e}")
@@ -1,25 +0,0 @@
import mariadb
from app.warehouse import get_db_connection
from flask import Flask
import os


def create_warehouse_locations_table():
    conn = get_db_connection()
    cursor = conn.cursor()
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS warehouse_locations (
            id BIGINT AUTO_INCREMENT PRIMARY KEY,
            location_code VARCHAR(12) NOT NULL UNIQUE,
            size INT,
            description VARCHAR(250)
        )
    ''')
    conn.commit()
    conn.close()


if __name__ == "__main__":
    instance_path = os.path.abspath("instance")
    app = Flask(__name__, instance_path=instance_path)
    with app.app_context():
        create_warehouse_locations_table()
        print("warehouse_locations table created or already exists.")
@@ -1,30 +0,0 @@
import mariadb

# Database connection credentials
def get_db_connection():
    return mariadb.connect(
        user="trasabilitate",                # Replace with your username
        password="Initial01!",               # Replace with your password
        host="localhost",                    # Replace with your host
        port=3306,                           # Default MariaDB port
        database="trasabilitate_database"    # Replace with your database name
    )

try:
    # Connect to the database
    conn = get_db_connection()
    cursor = conn.cursor()

    # Delete query
    delete_query = "DELETE FROM scan1_orders"
    cursor.execute(delete_query)
    conn.commit()

    print("All data from the 'scan1_orders' table has been deleted successfully.")

    # Close the connection
    cursor.close()
    conn.close()

except mariadb.Error as e:
    print(f"Error deleting data: {e}")
@@ -1,26 +0,0 @@
import mariadb
import os


def get_external_db_connection():
    settings_file = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../instance/external_server.conf'))
    settings = {}
    with open(settings_file, 'r') as f:
        for line in f:
            key, value = line.strip().split('=', 1)
            settings[key] = value
    return mariadb.connect(
        user=settings['username'],
        password=settings['password'],
        host=settings['server_domain'],
        port=int(settings['port']),
        database=settings['database_name']
    )


if __name__ == "__main__":
    conn = get_external_db_connection()
    cursor = conn.cursor()
    cursor.execute("DROP TABLE IF EXISTS users")
    cursor.execute("DROP TABLE IF EXISTS roles")
    conn.commit()
    conn.close()
    print("Dropped users and roles tables from external database.")
@@ -1,53 +0,0 @@
import sqlite3
import os


def check_database(db_path, description):
    """Check if a database exists and show its users."""
    if os.path.exists(db_path):
        print(f"\n{description}: FOUND at {db_path}")
        try:
            conn = sqlite3.connect(db_path)
            cursor = conn.cursor()

            # Check if users table exists
            cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='users'")
            if cursor.fetchone():
                cursor.execute("SELECT id, username, password, role FROM users")
                users = cursor.fetchall()
                if users:
                    print("Users in this database:")
                    for user in users:
                        print(f" ID: {user[0]}, Username: {user[1]}, Password: {user[2]}, Role: {user[3]}")
                else:
                    print(" Users table exists but is empty")
            else:
                print(" No users table found")
            conn.close()
        except Exception as e:
            print(f" Error reading database: {e}")
    else:
        print(f"\n{description}: NOT FOUND at {db_path}")


if __name__ == "__main__":
    # Check different possible locations for users.db

    # 1. Root quality_recticel/instance/users.db
    root_instance = "/home/ske087/quality_recticel/instance/users.db"
    check_database(root_instance, "Root instance users.db")

    # 2. App instance folder
    app_instance = "/home/ske087/quality_recticel/py_app/instance/users.db"
    check_database(app_instance, "App instance users.db")

    # 3. Current working directory
    cwd_db = "/home/ske087/quality_recticel/py_app/users.db"
    check_database(cwd_db, "Working directory users.db")

    # 4. Flask app database (relative to py_app)
    flask_db = "/home/ske087/quality_recticel/py_app/app/users.db"
    check_database(flask_db, "Flask app users.db")

    print("\n" + "="*50)
    print("RECOMMENDATION:")
    print("The login should use the external MariaDB database.")
    print("Make sure you have created the superadmin user in MariaDB using create_roles_table.py")
@@ -1,143 +0,0 @@
#!/usr/bin/env python3

import mariadb
import os
import sys

# Add the app directory to the path so we can import our permissions module
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))

from permissions import APP_PERMISSIONS, ROLE_HIERARCHY, ACTIONS, get_all_permissions, get_default_permissions_for_role

def get_external_db_connection():
    """Reads the external_server.conf file and returns a MariaDB database connection."""
    current_dir = os.path.dirname(os.path.abspath(__file__))
    instance_folder = os.path.join(current_dir, '../../instance')
    settings_file = os.path.join(instance_folder, 'external_server.conf')

    if not os.path.exists(settings_file):
        raise FileNotFoundError(f"The external_server.conf file is missing: {settings_file}")

    settings = {}
    with open(settings_file, 'r') as f:
        for line in f:
            line = line.strip()
            if line and '=' in line:
                key, value = line.split('=', 1)
                settings[key] = value

    return mariadb.connect(
        user=settings['username'],
        password=settings['password'],
        host=settings['server_domain'],
        port=int(settings['port']),
        database=settings['database_name']
    )

def main():
    try:
        print("=== Populating Permission System ===")
        conn = get_external_db_connection()
        cursor = conn.cursor()

        # 1. Populate all permissions
        print("\n1. Populating permissions...")
        permissions = get_all_permissions()

        for perm in permissions:
            try:
                cursor.execute('''
                    INSERT INTO permissions (permission_key, page, page_name, section, section_name, action, action_name)
                    VALUES (%s, %s, %s, %s, %s, %s, %s)
                    ON DUPLICATE KEY UPDATE
                        page_name = VALUES(page_name),
                        section_name = VALUES(section_name),
                        action_name = VALUES(action_name),
                        updated_at = CURRENT_TIMESTAMP
                ''', (
                    perm['key'],
                    perm['page'],
                    perm['page_name'],
                    perm['section'],
                    perm['section_name'],
                    perm['action'],
                    perm['action_name']
                ))
            except Exception as e:
                print(f" ⚠ Error inserting permission {perm['key']}: {e}")

        conn.commit()
        print(f" ✓ Populated {len(permissions)} permissions")

        # 2. Populate role hierarchy
        print("\n2. Populating role hierarchy...")
        for role_name, role_data in ROLE_HIERARCHY.items():
            try:
                cursor.execute('''
                    INSERT INTO role_hierarchy (role_name, display_name, description, level)
                    VALUES (%s, %s, %s, %s)
                    ON DUPLICATE KEY UPDATE
                        display_name = VALUES(display_name),
                        description = VALUES(description),
                        level = VALUES(level),
                        updated_at = CURRENT_TIMESTAMP
                ''', (
                    role_name,
                    role_data['name'],
                    role_data['description'],
                    role_data['level']
                ))
            except Exception as e:
                print(f" ⚠ Error inserting role {role_name}: {e}")

        conn.commit()
        print(f" ✓ Populated {len(ROLE_HIERARCHY)} roles")

        # 3. Set default permissions for each role
        print("\n3. Setting default role permissions...")
        for role_name in ROLE_HIERARCHY.keys():
            default_permissions = get_default_permissions_for_role(role_name)

            print(f" Setting permissions for {role_name}: {len(default_permissions)} permissions")

            for permission_key in default_permissions:
                try:
                    cursor.execute('''
                        INSERT INTO role_permissions (role, permission_key, granted, granted_by)
                        VALUES (%s, %s, TRUE, 'system')
                        ON DUPLICATE KEY UPDATE
                            granted = TRUE,
                            updated_at = CURRENT_TIMESTAMP
                    ''', (role_name, permission_key))
                except Exception as e:
                    print(f" ⚠ Error setting permission {permission_key} for {role_name}: {e}")

        conn.commit()

        # 4. Show summary
        print("\n4. Permission Summary:")
        cursor.execute('''
            SELECT r.role_name, r.display_name, COUNT(rp.permission_key) as permission_count
            FROM role_hierarchy r
            LEFT JOIN role_permissions rp ON r.role_name = rp.role AND rp.granted = TRUE
            GROUP BY r.role_name, r.display_name
            ORDER BY r.level DESC
        ''')

        results = cursor.fetchall()
        for role_name, display_name, count in results:
            print(f" {display_name} ({role_name}): {count} permissions")

        conn.close()
        print("\n=== Permission System Initialization Complete ===")

    except Exception as e:
        print(f"❌ Error: {e}")
        import traceback
        traceback.print_exc()
        return 1

    return 0


if __name__ == "__main__":
    sys.exit(main())
@@ -1,30 +0,0 @@
import sqlite3
import os

instance_folder = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../instance'))
db_path = os.path.join(instance_folder, 'users.db')

if not os.path.exists(db_path):
    print("users.db not found at", db_path)
    exit(1)

conn = sqlite3.connect(db_path)
cursor = conn.cursor()

# Check if users table exists
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='users'")
if not cursor.fetchone():
    print("No users table found in users.db.")
    conn.close()
    exit(1)

# Print all users
cursor.execute("SELECT id, username, password, role FROM users")
rows = cursor.fetchall()
if not rows:
    print("No users found in users.db.")
else:
    print("Users in users.db:")
    for row in rows:
        print(f"id={row[0]}, username={row[1]}, password={row[2]}, role={row[3]}")
conn.close()
@@ -1,34 +0,0 @@
import mariadb

# Database connection credentials
db_config = {
    "user": "trasabilitate",
    "password": "Initial01!",
    "host": "localhost",
    "database": "trasabilitate_database"
}

try:
    # Connect to the database
    conn = mariadb.connect(**db_config)
    cursor = conn.cursor()

    # Query to fetch all records from the scan1 table
    query = "SELECT * FROM scan1_orders ORDER BY Id DESC LIMIT 15"
    cursor.execute(query)

    # Fetch and print the results
    rows = cursor.fetchall()
    if rows:
        print("Records in the 'scan1_orders' table:")
        for row in rows:
            print(row)
    else:
        print("No records found in the 'scan1_orders' table.")

    # Close the connection
    cursor.close()
    conn.close()

except mariadb.Error as e:
    print(f"Error connecting to the database: {e}")
@@ -1,50 +0,0 @@
import mariadb

# Database connection credentials
DB_CONFIG = {
    "user": "sa",
    "password": "12345678",
    "host": "localhost",
    "database": "recticel"
}

def recreate_order_for_labels_table():
    conn = mariadb.connect(**DB_CONFIG)
    cursor = conn.cursor()
    print("Connected to the database successfully!")

    # Drop the table if it exists
    cursor.execute("DROP TABLE IF EXISTS order_for_labels")
    print("Dropped existing 'order_for_labels' table.")

    # Create the table with the new unique constraint
    create_table_sql = """
    CREATE TABLE order_for_labels (
        id BIGINT AUTO_INCREMENT PRIMARY KEY COMMENT 'Unique identifier',
        comanda_productie VARCHAR(15) NOT NULL UNIQUE COMMENT 'Production Order (unique)',
        cod_articol VARCHAR(15) COMMENT 'Article Code',
        descr_com_prod VARCHAR(50) NOT NULL COMMENT 'Production Order Description',
        cantitate INT(3) NOT NULL COMMENT 'Quantity',
        data_livrare DATE COMMENT 'Delivery date',
        dimensiune VARCHAR(20) COMMENT 'Dimensions',
        com_achiz_client VARCHAR(25) COMMENT 'Client Purchase Order',
        nr_linie_com_client INT(3) COMMENT 'Client Order Line Number',
        customer_name VARCHAR(50) COMMENT 'Customer Name',
        customer_article_number VARCHAR(25) COMMENT 'Customer Article Number',
        open_for_order VARCHAR(25) COMMENT 'Open for Order Status',
        line_number INT(3) COMMENT 'Line Number',
        printed_labels TINYINT(1) NOT NULL DEFAULT 0 COMMENT 'Boolean flag: 0=labels not printed, 1=labels printed',
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP COMMENT 'Record creation timestamp',
        updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'Record update timestamp'
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='Table for storing order information for label generation';
    """
    cursor.execute(create_table_sql)
    print("Created new 'order_for_labels' table with unique comanda_productie.")

    conn.commit()
    cursor.close()
    conn.close()
    print("Done.")

if __name__ == "__main__":
    recreate_order_for_labels_table()
@@ -1,34 +0,0 @@
import sqlite3
import os
from flask import Flask

app = Flask(__name__)
app.config['SECRET_KEY'] = 'your_secret_key'  # Use the same key as in __init__.py

instance_folder = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../instance'))
if not os.path.exists(instance_folder):
    os.makedirs(instance_folder)
db_path = os.path.join(instance_folder, 'users.db')

conn = sqlite3.connect(db_path)
cursor = conn.cursor()

# Create users table if not exists
cursor.execute('''
    CREATE TABLE IF NOT EXISTS users (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        username TEXT UNIQUE NOT NULL,
        password TEXT NOT NULL,
        role TEXT NOT NULL
    )
''')

# Insert superadmin user if not exists
cursor.execute('''
    INSERT OR IGNORE INTO users (username, password, role)
    VALUES (?, ?, ?)
''', ('superadmin', 'superadmin123', 'superadmin'))

conn.commit()
conn.close()
print("Internal users.db seeded with superadmin user.")
@@ -1,37 +0,0 @@
import mariadb

# Database connection credentials
def get_db_connection():
    return mariadb.connect(
        user="trasabilitate",                # Replace with your username
        password="Initial01!",               # Replace with your password
        host="localhost",                    # Replace with your host
        port=3306,                           # Default MariaDB port
        database="trasabilitate_database"    # Replace with your database name
    )

try:
    # Connect to the database
    conn = get_db_connection()
    cursor = conn.cursor()

    # Insert query
    insert_query = """
    INSERT INTO scan1_orders (operator_code, CP_full_code, OC1_code, OC2_code, quality_code, date, time)
    VALUES (?, ?, ?, ?, ?, ?, ?)
    """
    # Values to insert
    values = ('OP01', 'CP12345678-0002', 'OC11', 'OC22', 000, '2025-04-22', '14:30:00')

    # Execute the query
    cursor.execute(insert_query, values)
    conn.commit()

    print("Test data inserted successfully into scan1_orders.")

    # Close the connection
    cursor.close()
    conn.close()

except mariadb.Error as e:
    print(f"Error inserting data: {e}")
@@ -1,361 +0,0 @@
# Quality Recticel Windows Print Service - Installation Guide

## 📋 Overview

The Quality Recticel Windows Print Service enables **silent PDF printing** directly from the web application through a Chrome extension. This system eliminates the need for manual PDF downloads and provides seamless label printing functionality.

## 🏗️ System Architecture

```
Web Application (print_module.html)
          ↓
Windows Print Service (localhost:8765)
          ↓
Chrome Extension (Native Messaging)
          ↓
Windows Print System
```

## 📦 Package Contents

```
windows_print_service/
├── print_service.py        # Main Windows service (Flask API)
├── service_manager.py      # Service installation & management
├── install_service.bat     # Automated installation script
├── chrome_extension/       # Chrome extension files
│   ├── manifest.json       # Extension configuration
│   ├── background.js       # Service worker
│   ├── content.js          # Page integration
│   ├── popup.html          # Extension UI
│   ├── popup.js            # Extension logic
│   └── icons/              # Extension icons
└── INSTALLATION_GUIDE.md   # This documentation
```

## 🔧 Prerequisites

### System Requirements
- **Operating System**: Windows 10/11 (64-bit)
- **Python**: Python 3.8 or higher
- **Browser**: Google Chrome (latest version)
- **Privileges**: Administrator access required for installation

### Python Dependencies
The following packages will be installed automatically:
- `flask` - Web service framework
- `flask-cors` - Cross-origin resource sharing
- `requests` - HTTP client library
- `pywin32` - Windows service integration

## 🚀 Installation Process

### Step 1: Download and Extract Files

1. Download the `windows_print_service` folder to your system
2. Extract to a permanent location (e.g., `C:\QualityRecticel\PrintService\`)
3. **Do not move or delete this folder after installation**

### Step 2: Install Windows Service

#### Method A: Automated Installation (Recommended)

1. **Right-click** on `install_service.bat`
2. Select **"Run as administrator"**
3. Click **"Yes"** when the Windows UAC prompt appears
4. Wait for installation to complete

#### Method B: Manual Installation

If the automated script fails, follow these steps:

```bash
# Open Command Prompt as Administrator
cd C:\path\to\windows_print_service

# Install Python dependencies
pip install flask flask-cors requests pywin32

# Install Windows service
python service_manager.py install

# Add firewall exception
netsh advfirewall firewall add rule name="Quality Recticel Print Service" dir=in action=allow protocol=TCP localport=8765

# Create Chrome extension registry entry
reg add "HKEY_CURRENT_USER\Software\Google\Chrome\NativeMessagingHosts\com.qualityrecticel.printservice" /ve /d "%cd%\chrome_extension\manifest.json" /f
```

### Step 3: Install Chrome Extension

1. Open **Google Chrome**
2. Navigate to `chrome://extensions/`
3. Enable **"Developer mode"** (toggle in top-right corner)
4. Click **"Load unpacked"**
5. Select the `chrome_extension` folder
6. Verify the extension appears with a printer icon

### Step 4: Verify Installation

#### Check Windows Service Status

1. Press `Win + R`, type `services.msc`, press Enter
2. Look for **"Quality Recticel Print Service"**
3. Status should show **"Running"**
4. Startup type should be **"Automatic"**

#### Test API Endpoints

Open a web browser and visit:
- **Health Check**: `http://localhost:8765/health`
- **Printer List**: `http://localhost:8765/printers`

Expected response for health check:
```json
{
    "status": "healthy",
    "service": "Quality Recticel Print Service",
    "version": "1.0",
    "timestamp": "2025-09-21T10:30:00"
}
```

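The same checks can also be scripted. The snippet below is a minimal sketch, assuming only the `requests` package listed under Prerequisites and the two endpoints above:

```python
import requests

BASE_URL = "http://localhost:8765"

# Health check - should return the JSON shown above
health = requests.get(f"{BASE_URL}/health", timeout=5)
print(health.status_code, health.json())

# Printer list
printers = requests.get(f"{BASE_URL}/printers", timeout=5)
print(printers.json())
```
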
#### Test Chrome Extension

1. Click the extension icon in the Chrome toolbar
2. Verify it shows "Service Status: Connected ✅"
3. Check that printers are listed
4. Try the "Test Print" button

## 🔄 Web Application Integration

The web application automatically detects the Windows service and adapts the user interface:

### Service Available (Green Button)
- Button text: **"🖨️ Print Labels (Silent)"**
- Functionality: Direct printing to default printer
- User experience: Click → Labels print immediately

### Service Unavailable (Blue Button)
- Button text: **"📄 Generate PDF"**
- Functionality: PDF download for manual printing
- User experience: Click → PDF downloads to browser

### Detection Logic
```javascript
// Automatic service detection on page load
const response = await fetch('http://localhost:8765/health');
if (response.ok) {
    // Service available - enable silent printing
} else {
    // Service unavailable - fallback to PDF download
}
```

## 🛠️ Configuration

### Service Configuration

The service runs with the following default settings:

| Setting | Value | Description |
|---------|-------|-------------|
| **Port** | 8765 | Local API port |
| **Host** | localhost | Service binding |
| **Startup** | Automatic | Starts with Windows |
| **Printer** | Default | Uses system default printer |
| **Copies** | 1 | Default print copies |

### Chrome Extension Permissions

The extension requires these permissions:
- `printing` - Access to printer functionality
- `nativeMessaging` - Communication with Windows service
- `activeTab` - Access to current webpage
- `storage` - Save extension settings

## 🔍 Troubleshooting

### Common Issues

#### 1. Service Not Starting
**Symptoms**: API not accessible at localhost:8765
**Solutions**:
```bash
# Check service status
python -c "from service_manager import service_status; service_status()"

# Restart service manually
python service_manager.py restart

# Check Windows Event Viewer for service errors
```

#### 2. Chrome Extension Not Working
**Symptoms**: Extension shows "Service Status: Disconnected ❌"
**Solutions**:
- Verify the Windows service is running
- Check firewall settings (port 8765 must be open)
- Reload the Chrome extension
- Restart the Chrome browser

#### 3. Firewall Blocking Connection
**Symptoms**: Service runs but the web page can't connect
**Solutions**:
```bash
# Add firewall rule manually
netsh advfirewall firewall add rule name="Quality Recticel Print Service" dir=in action=allow protocol=TCP localport=8765

# Or disable Windows Firewall temporarily to test
```

#### 4. Permission Denied Errors
**Symptoms**: Installation fails with permission errors
**Solutions**:
- Ensure you are running as Administrator
- Check Windows UAC settings
- Verify Python installation permissions

#### 5. Print Jobs Not Processing
**Symptoms**: API accepts requests but nothing prints
**Solutions**:
- Check the default printer configuration
- Verify printer drivers are installed
- Test manual printing from other applications
- Check the Windows Print Spooler service

### Log Files

Check these locations for troubleshooting:

| Component | Log Location |
|-----------|--------------|
| **Windows Service** | `print_service.log` (same folder as service) |
| **Chrome Extension** | Chrome DevTools → Extensions → Background page |
| **Windows Event Log** | Event Viewer → Windows Logs → System |

### Diagnostic Commands

```bash
# Check service status
python service_manager.py status

# Test API manually
curl http://localhost:8765/health

# List available printers
curl http://localhost:8765/printers

# Check Windows service
sc query QualityRecticelPrintService

# Check listening ports
netstat -an | findstr :8765
```

## 🔄 Maintenance

### Updating the Service

1. Stop the current service:
   ```bash
   python service_manager.py stop
   ```

2. Replace service files with new versions

3. Restart the service:
   ```bash
   python service_manager.py start
   ```

### Uninstalling

#### Remove Chrome Extension
1. Go to `chrome://extensions/`
2. Find "Quality Recticel Print Service"
3. Click "Remove"

#### Remove Windows Service
```bash
# Run as Administrator
python service_manager.py uninstall
```

#### Remove Firewall Rule
```bash
netsh advfirewall firewall delete rule name="Quality Recticel Print Service"
```

## 📞 Support Information

### API Endpoints Reference

| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/health` | GET | Service health check |
| `/printers` | GET | List available printers |
| `/print/pdf` | POST | Print PDF from URL |
| `/print/silent` | POST | Silent print with metadata |

### Request Examples

**Silent Print Request**:
```json
POST /print/silent
{
    "pdf_url": "http://localhost:5000/generate_labels_pdf/123",
    "printer_name": "default",
    "copies": 1,
    "silent": true,
    "order_id": "123",
    "quantity": "10"
}
```

**Expected Response**:
```json
{
    "success": true,
    "message": "Print job sent successfully",
    "job_id": "print_20250921_103000",
    "printer": "HP LaserJet Pro",
    "timestamp": "2025-09-21T10:30:00"
}
```

## 📚 Technical Details

### Service Architecture
- **Framework**: Flask (Python)
- **Service Type**: Windows Service (pywin32)
- **Communication**: HTTP REST API + Native Messaging
- **Security**: Localhost binding only (127.0.0.1:8765)

### Chrome Extension Architecture
- **Manifest Version**: 3
- **Service Worker**: Handles background print requests
- **Content Script**: Integrates with Quality Recticel web pages
- **Native Messaging**: Communicates with Windows service

### Security Considerations
- Service only accepts local connections (localhost)
- No external network access required
- Chrome extension runs in a sandboxed environment
- Windows service runs with system privileges (required for printing)

---

## 📋 Quick Start Checklist

- [ ] Download `windows_print_service` folder
- [ ] Right-click `install_service.bat` → "Run as administrator"
- [ ] Install Chrome extension from `chrome_extension` folder
- [ ] Verify service at `http://localhost:8765/health`
- [ ] Test printing from the Quality Recticel web application

**Installation Time**: ~5 minutes
**User Training Required**: Minimal (automatic detection and fallback)
**Maintenance**: Zero (auto-starts with Windows)

For additional support, check the log files and diagnostic commands listed above.
@@ -1,69 +0,0 @@
# 🚀 Quality Recticel Print Service - Quick Setup

## 📦 What You Get
- **Silent PDF Printing** - No more manual downloads!
- **Automatic Detection** - Smart fallback when the service is unavailable
- **Zero Configuration** - Works out of the box

## ⚡ 2-Minute Installation

### Step 1: Install Windows Service
1. **Right-click** `install_service.bat`
2. Select **"Run as administrator"**
3. Click **"Yes"** and wait for completion

### Step 2: Install Chrome Extension
1. Open Chrome → `chrome://extensions/`
2. Enable **"Developer mode"**
3. Click **"Load unpacked"** → Select `chrome_extension` folder

### Step 3: Verify Installation
- Visit: `http://localhost:8765/health`
- Should see: `{"status": "healthy"}`

## 🎯 How It Works

| Service Status | Button Appearance | What Happens |
|---------------|-------------------|--------------|
| **Running** ✅ | 🖨️ **Print Labels (Silent)** (Green) | Direct printing |
| **Not Running** ❌ | 📄 **Generate PDF** (Blue) | PDF download |

## ⚠️ Troubleshooting

| Problem | Solution |
|---------|----------|
| **Service won't start** | Run `install_service.bat` as Administrator |
| **Chrome extension not working** | Reload extension in `chrome://extensions/` |
| **Can't connect to localhost:8765** | Check Windows Firewall (port 8765) |
| **Nothing prints** | Verify default printer is set up |

## 🔧 Management Commands

```bash
# Check service status
python service_manager.py status

# Restart service
python service_manager.py restart

# Uninstall service
python service_manager.py uninstall
```

## 📍 Important Notes

- ⚡ **Auto-starts** with Windows - no manual intervention needed
- 🔒 **Local only** - service only accessible from the same computer
- 🖨️ **Uses default printer** - configure your default printer in Windows
- 💾 **Don't move files** after installation - keep the folder in the same location

## 🆘 Quick Support

**Service API**: `http://localhost:8765`
**Health Check**: `http://localhost:8765/health`
**Printer List**: `http://localhost:8765/printers`

**Log File**: `print_service.log` (same folder as installation)

---
*Installation takes ~5 minutes • Zero maintenance required • Works with the existing Quality Recticel web application*
@@ -1,348 +0,0 @@
# Quality Recticel Windows Print Service

## 🏗️ Technical Architecture

Local Windows service providing a REST API for silent PDF printing via Chrome extension integration.

```
┌─────────────────────────────────────────────────────────────┐
│                  Quality Recticel Web App                    │
│                   (print_module.html)                        │
└─────────────────────┬───────────────────────────────────────┘
                      │ HTTP Request
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                   Windows Print Service                      │
│                    (localhost:8765)                          │
│  ┌─────────────┐  ┌──────────────┐  ┌─────────────────┐      │
│  │   Flask     │  │    CORS      │  │   PDF Handler   │      │
│  │   Server    │  │   Support    │  │                 │      │
│  └─────────────┘  └──────────────┘  └─────────────────┘      │
└─────────────────────┬───────────────────────────────────────┘
                      │ Native Messaging
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                     Chrome Extension                         │
│  ┌─────────────┐  ┌──────────────┐  ┌─────────────────┐      │
│  │ Background  │  │   Content    │  │     Popup       │      │
│  │  Service    │  │   Script     │  │      UI         │      │
│  │   Worker    │  │              │  │                 │      │
│  └─────────────┘  └──────────────┘  └─────────────────┘      │
└─────────────────────┬───────────────────────────────────────┘
                      │ Windows API
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                   Windows Print System                       │
└─────────────────────────────────────────────────────────────┘
```

## 📁 Project Structure

```
windows_print_service/
├── 📄 print_service.py          # Main Flask service
├── 📄 service_manager.py        # Windows service wrapper
├── 📄 install_service.bat       # Installation script
├── 📄 INSTALLATION_GUIDE.md     # Complete documentation
├── 📄 QUICK_SETUP.md            # User quick reference
├── 📄 README.md                 # This file
└── 📁 chrome_extension/         # Chrome extension
    ├── 📄 manifest.json         # Extension manifest v3
    ├── 📄 background.js         # Service worker
    ├── 📄 content.js            # Page content integration
    ├── 📄 popup.html            # Extension popup UI
    ├── 📄 popup.js              # Popup functionality
    └── 📁 icons/                # Extension icons
```

## 🚀 API Endpoints

### Base URL: `http://localhost:8765`

| Endpoint | Method | Description | Request Body | Response |
|----------|--------|-------------|--------------|----------|
| `/health` | GET | Service health check | None | `{"status": "healthy", ...}` |
| `/printers` | GET | List available printers | None | `{"printers": [...]}` |
| `/print/pdf` | POST | Print PDF from URL | `{"url": "...", "printer": "..."}` | `{"success": true, ...}` |
| `/print/silent` | POST | Silent print with metadata | `{"pdf_url": "...", "order_id": "..."}` | `{"success": true, ...}` |

### Example API Usage

```javascript
// Health Check
const health = await fetch('http://localhost:8765/health');
const status = await health.json();

// Silent Print
const printRequest = {
    pdf_url: 'http://localhost:5000/generate_labels_pdf/123',
    printer_name: 'default',
    copies: 1,
    silent: true,
    order_id: '123',
    quantity: '10'
};

const response = await fetch('http://localhost:8765/print/silent', {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify(printRequest)
});
```

## 🔧 Development Setup

### Prerequisites
- Python 3.8+
- Windows 10/11
- Chrome Browser
- Administrator privileges

### Local Development

```bash
# Clone/download the project
cd windows_print_service

# Install dependencies
pip install flask flask-cors requests pywin32

# Run development server (not as service)
python print_service.py

# Install as Windows service
python service_manager.py install

# Service management
python service_manager.py start
python service_manager.py stop
python service_manager.py restart
python service_manager.py uninstall
```

### Chrome Extension Development

```bash
# Load extension in Chrome
chrome://extensions/ → Developer mode ON → Load unpacked

# Debug extension
chrome://extensions/ → Details → Background page (for service worker)
chrome://extensions/ → Details → Inspect views (for popup)
```

## 📋 Configuration

### Service Configuration (`print_service.py`)

```python
class WindowsPrintService:
    def __init__(self, host='127.0.0.1', port=8765):
        self.host = host              # Localhost binding only
        self.port = port              # Service port
        self.app = Flask(__name__)
```

### Chrome Extension Permissions (`manifest.json`)

```json
{
    "permissions": [
        "printing",          // Access to printer API
        "nativeMessaging",   // Communication with Windows service
        "activeTab",         // Current tab access
        "storage"            // Extension settings storage
    ]
}
```

## 🔄 Integration Flow

### 1. Service Detection
```javascript
// Web page detects service availability
const isServiceAvailable = await checkServiceHealth();
updatePrintButton(isServiceAvailable);
```

### 2. Print Request Flow
```
User clicks print → Web app → Windows service → Chrome extension → Printer
```

### 3. Fallback Mechanism
```
Service unavailable → Fallback to PDF download → Manual printing
```

## 🛠️ Customization

### Adding New Print Options

```python
# In print_service.py
@app.route('/print/custom', methods=['POST'])
def print_custom():
    data = request.json
    # Custom print logic here
    return jsonify({'success': True})
```

### Modifying Chrome Extension

```javascript
// In background.js - Add new message handler
chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
    if (message.type === 'CUSTOM_PRINT') {
        // Custom print logic
    }
});
```

### Web Application Integration

```javascript
// In print_module.html - Modify print function
async function customPrintFunction(orderId) {
    const response = await fetch('http://localhost:8765/print/custom', {
        method: 'POST',
        body: JSON.stringify({orderId, customOptions: {...}})
    });
}
```

## 🧪 Testing

### Unit Tests (Future Enhancement)

```python
# test_print_service.py
import unittest
from print_service import WindowsPrintService

class TestPrintService(unittest.TestCase):
    def test_health_endpoint(self):
        # Test implementation
        pass
```

### Manual Testing Checklist

- [ ] Service starts automatically on Windows boot
- [ ] API endpoints respond correctly
- [ ] Chrome extension loads without errors
- [ ] Print jobs execute successfully
- [ ] Fallback works when service unavailable
- [ ] Firewall allows port 8765 traffic

## 📊 Monitoring & Logging

### Log Files
- **Service Log**: `print_service.log` (Flask application logs)
- **Windows Event Log**: Windows Services logs
- **Chrome DevTools**: Extension console logs

### Health Monitoring

```python
# Monitor service health
import requests
try:
    response = requests.get('http://localhost:8765/health', timeout=5)
    if response.status_code == 200:
        print("✅ Service healthy")
except:
    print("❌ Service unavailable")
```

## 🔒 Security Considerations

### Network Security
- **Localhost Only**: Service binds to 127.0.0.1 (no external access)
- **No Authentication**: Relies on local machine security
- **Firewall Rule**: Port 8765 opened for local connections only

### Chrome Extension Security
- **Manifest V3**: Latest security standards
- **Minimal Permissions**: Only necessary permissions requested
- **Sandboxed**: Runs in Chrome's security sandbox

### Windows Service Security
- **System Service**: Runs with appropriate Windows service privileges
- **Print Permissions**: Requires printer access (normal for print services)

## 🚀 Deployment

### Production Deployment

1. **Package Distribution**:
   ```bash
   # Create deployment package
   zip -r quality_recticel_print_service.zip windows_print_service/
   ```

2. **Installation Script**: Use `install_service.bat` for end users

3. **Group Policy Deployment**: Deploy Chrome extension via enterprise policies

### Enterprise Considerations

- **Silent Installation**: Modify `install_service.bat` for unattended install
- **Registry Deployment**: Pre-configure Chrome extension registry entries
- **Network Policies**: Ensure firewall policies allow localhost:8765

## 📚 Dependencies

### Python Packages
```
flask>=2.3.0        # Web framework
flask-cors>=4.0.0   # CORS support
requests>=2.31.0    # HTTP client
pywin32>=306        # Windows service integration
```

### Chrome APIs
- `chrome.printing.*` - Printing functionality
- `chrome.runtime.*` - Extension messaging
- `chrome.nativeMessaging.*` - Native app communication

## 🐛 Debugging

### Common Debug Commands

```bash
# Check service status
sc query QualityRecticelPrintService

# Test API manually
curl http://localhost:8765/health

# Check listening ports
netstat -an | findstr :8765

# View service logs
type print_service.log
```

### Chrome Extension Debugging

```javascript
// In background.js - Add debug logging
console.log('Print request received:', message);

// In popup.js - Test API connection
fetch('http://localhost:8765/health')
    .then(r => r.json())
    .then(data => console.log('Service status:', data));
```

---

## 📄 License & Support

**Project**: Quality Recticel Print Service
**Version**: 1.0
**Compatibility**: Windows 10/11, Chrome 88+
**Maintenance**: Zero-maintenance after installation

For technical support, refer to the `INSTALLATION_GUIDE.md` troubleshooting section.
@@ -1,5 +0,0 @@
Server Domain/IP Address: testserver.com
Port: 3602
Database Name: recticel
Username: sa
Password: 12345678
@@ -1,121 +0,0 @@
#!/usr/bin/env python3
"""
Script to add modules column to external database and migrate existing users
"""

import os
import sys
import mariadb

def migrate_external_database():
    """Add modules column to external database and update existing users"""
    try:
        # Read external database configuration from instance folder
        config_file = os.path.join(os.path.dirname(__file__), 'instance/external_server.conf')
        if not os.path.exists(config_file):
            print("External database configuration file not found at instance/external_server.conf")
            return False

        with open(config_file, 'r') as f:
            lines = f.read().strip().split('\n')

        # Parse the config file format "key=value"
        config = {}
        for line in lines:
            if '=' in line and not line.strip().startswith('#'):
                key, value = line.split('=', 1)
                config[key.strip()] = value.strip()

        host = config.get('server_domain', 'localhost')
        port = int(config.get('port', '3306'))
        database = config.get('database_name', '')
        user = config.get('username', '')
        password = config.get('password', '')

        if not all([host, database, user, password]):
            print("Missing required database configuration values.")
            return False

        print(f"Connecting to external database: {host}:{port}/{database}")

        # Connect to external database
        conn = mariadb.connect(
            user=user,
            password=password,
            host=host,
            port=port,
            database=database
        )
        cursor = conn.cursor()

        # Check if users table exists
        cursor.execute("SHOW TABLES LIKE 'users'")
        if not cursor.fetchone():
            print("Users table not found in external database.")
            conn.close()
            return False

        # Check if modules column already exists
        cursor.execute("DESCRIBE users")
        columns = [row[0] for row in cursor.fetchall()]

        if 'modules' not in columns:
            print("Adding modules column to users table...")
            cursor.execute("ALTER TABLE users ADD COLUMN modules TEXT")
            print("Modules column added successfully.")
        else:
            print("Modules column already exists.")

        # Get current users and convert their roles
        cursor.execute("SELECT id, username, role FROM users")
        users = cursor.fetchall()

        role_mapping = {
            'superadmin': ('superadmin', None),
            'administrator': ('admin', None),
            'admin': ('admin', None),
            'quality': ('manager', '["quality"]'),
            'warehouse': ('manager', '["warehouse"]'),
            'warehouse_manager': ('manager', '["warehouse"]'),
            'scan': ('worker', '["quality"]'),
            'etichete': ('manager', '["labels"]'),
            'quality_manager': ('manager', '["quality"]'),
            'quality_worker': ('worker', '["quality"]'),
        }

        print(f"Migrating {len(users)} users...")

        for user_id, username, old_role in users:
            if old_role in role_mapping:
                new_role, modules_json = role_mapping[old_role]

                cursor.execute("UPDATE users SET role = ?, modules = ? WHERE id = ?",
                               (new_role, modules_json, user_id))

                print(f" {username}: {old_role} -> {new_role} with modules {modules_json}")
            else:
                print(f" {username}: Unknown role '{old_role}', keeping as-is")

        conn.commit()
        conn.close()

        print("External database migration completed successfully!")
        return True

    except Exception as e:
        print(f"Error migrating external database: {e}")
        return False

if __name__ == "__main__":
    print("External Database Migration for Simplified 4-Tier Permission System")
    print("=" * 70)

    success = migrate_external_database()

    if success:
        print("\n✅ Migration completed successfully!")
        print("\nUsers can now log in with the new simplified permission system.")
        print("Role structure: superadmin → admin → manager → worker")
        print("Modules: quality, warehouse, labels")
    else:
        print("\n❌ Migration failed. Please check the error messages above.")
@@ -1,172 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Migration script to convert from complex permission system to simplified 4-tier system
|
||||
This script will:
|
||||
1. Add 'modules' column to users table
|
||||
2. Convert existing roles to new 4-tier system
|
||||
3. Assign appropriate modules based on old roles
|
||||
"""
|
||||
|
||||
import sqlite3
|
||||
import json
|
||||
import os
|
||||
import sys
|
||||
|
||||
# Add the app directory to Python path
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))
|
||||
|
||||
def get_db_connections():
|
||||
"""Get both internal SQLite and external database connections"""
|
||||
connections = {}
|
||||
|
||||
# Internal SQLite database
|
||||
internal_db_path = os.path.join(os.path.dirname(__file__), 'instance/users.db')
|
||||
if os.path.exists(internal_db_path):
|
||||
connections['internal'] = sqlite3.connect(internal_db_path)
|
||||
print(f"Connected to internal SQLite database: {internal_db_path}")
|
||||
|
||||
# External database (try to connect using existing method)
|
||||
try:
|
||||
import mariadb
|
||||
|
||||
# Read external database configuration
|
||||
config_file = os.path.join(os.path.dirname(__file__), '../external_database_settings')
|
||||
if os.path.exists(config_file):
|
||||
with open(config_file, 'r') as f:
|
||||
lines = f.read().strip().split('\n')
|
||||
if len(lines) >= 5:
|
||||
host = lines[0].strip()
|
||||
port = int(lines[1].strip())
|
||||
database = lines[2].strip()
|
||||
user = lines[3].strip()
|
||||
password = lines[4].strip()
|
||||
|
||||
conn = mariadb.connect(
|
||||
user=user,
|
||||
password=password,
|
||||
host=host,
|
||||
port=port,
|
||||
database=database
|
||||
)
|
||||
connections['external'] = conn
|
||||
print(f"Connected to external MariaDB database: {host}:{port}/{database}")
|
||||
except Exception as e:
|
||||
print(f"Could not connect to external database: {e}")
|
||||
|
||||
return connections
|
||||
|
||||
def role_mapping():
|
||||
"""Map old roles to new 4-tier system"""
|
||||
return {
|
||||
# Old role -> (new_role, modules)
|
||||
'superadmin': ('superadmin', []), # All modules by default
|
||||
'administrator': ('admin', []), # All modules by default
|
||||
'admin': ('admin', []), # All modules by default
|
||||
'quality': ('manager', ['quality']),
|
||||
'warehouse': ('manager', ['warehouse']),
|
||||
'warehouse_manager': ('manager', ['warehouse']),
|
||||
'scan': ('worker', ['quality']), # Assume scan users are quality workers
|
||||
'etichete': ('manager', ['labels']),
|
||||
'quality_manager': ('manager', ['quality']),
|
||||
'quality_worker': ('worker', ['quality']),
|
||||
}
|
||||
|
||||
def migrate_database(conn, db_type):
|
||||
"""Migrate a specific database"""
|
||||
cursor = conn.cursor()
|
||||
|
||||
print(f"Migrating {db_type} database...")
|
||||
|
||||
# Check if users table exists
|
||||
if db_type == 'internal':
|
||||
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='users'")
|
||||
else: # external/MariaDB
|
||||
cursor.execute("SHOW TABLES LIKE 'users'")
|
||||
|
||||
if not cursor.fetchone():
|
||||
print(f"No users table found in {db_type} database")
|
||||
return
|
||||
|
||||
# Check if modules column already exists
|
||||
try:
|
||||
if db_type == 'internal':
|
||||
cursor.execute("PRAGMA table_info(users)")
|
||||
columns = [row[1] for row in cursor.fetchall()]
|
||||
else: # external/MariaDB
|
||||
cursor.execute("DESCRIBE users")
|
||||
columns = [row[0] for row in cursor.fetchall()]
|
||||
|
||||
if 'modules' not in columns:
|
||||
print(f"Adding modules column to {db_type} database...")
|
||||
if db_type == 'internal':
|
||||
cursor.execute("ALTER TABLE users ADD COLUMN modules TEXT")
|
||||
else: # external/MariaDB
|
||||
cursor.execute("ALTER TABLE users ADD COLUMN modules TEXT")
|
||||
else:
|
||||
print(f"Modules column already exists in {db_type} database")
|
||||
|
||||
except Exception as e:
|
||||
print(f"Error checking/adding modules column in {db_type}: {e}")
|
||||
return
|
||||
|
||||
# Get current users
|
||||
cursor.execute("SELECT id, username, role FROM users")
|
||||
users = cursor.fetchall()
|
||||
|
||||
print(f"Found {len(users)} users in {db_type} database")
|
||||
|
||||
# Convert roles and assign modules
|
||||
mapping = role_mapping()
|
||||
updates = []
|
||||
|
||||
for user_id, username, old_role in users:
|
||||
if old_role in mapping:
|
||||
new_role, modules = mapping[old_role]
|
||||
modules_json = json.dumps(modules) if modules else None
|
||||
updates.append((new_role, modules_json, user_id, username))
|
||||
print(f" {username}: {old_role} -> {new_role} with modules {modules}")
|
||||
else:
|
||||
print(f" {username}: Unknown role '{old_role}', keeping as-is")
|
||||
|
||||
# Apply updates
|
||||
for new_role, modules_json, user_id, username in updates:
|
||||
try:
|
||||
cursor.execute("UPDATE users SET role = ?, modules = ? WHERE id = ?",
|
||||
(new_role, modules_json, user_id))
|
||||
print(f" Updated {username} successfully")
|
||||
except Exception as e:
|
||||
print(f" Error updating {username}: {e}")
|
||||
|
||||
conn.commit()
|
||||
print(f"Migration completed for {db_type} database")
|
||||
|
||||
def main():
|
||||
"""Main migration function"""
|
||||
print("Starting migration to simplified 4-tier permission system...")
|
||||
print("="*60)
|
||||
|
||||
connections = get_db_connections()
|
||||
|
||||
if not connections:
|
||||
print("No database connections available. Please check your configuration.")
|
||||
return
|
||||
|
||||
for db_type, conn in connections.items():
|
||||
try:
|
||||
migrate_database(conn, db_type)
|
||||
print()
|
||||
except Exception as e:
|
||||
print(f"Error migrating {db_type} database: {e}")
|
||||
finally:
|
||||
conn.close()
|
||||
|
||||
print("Migration completed!")
|
||||
print("\nNew role structure:")
|
||||
print("- superadmin: Full system access")
|
||||
print("- admin: Full app access (except role_permissions and download_extension)")
|
||||
print("- manager: Module-based access (can have multiple modules)")
|
||||
print("- worker: Limited module access (one module only)")
|
||||
print("\nAvailable modules: quality, warehouse, labels")
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -1,35 +0,0 @@
QZ TRAY LIBRARY PATCH NOTES
===========================
Version: 2.2.4 (patched for a custom QZ Tray build with pairing-key authentication)
Date: October 2, 2025

CHANGES MADE:
-------------

1. Line ~387: Commented out certificate sending
   - Original: _qz.websocket.connection.sendData({ certificate: cert, promise: openPromise });
   - Patched: openPromise.resolve(); (resolves immediately without sending a certificate)

2. Lines ~391-403: Bypassed certificate retrieval
   - Original: called _qz.security.callCert() to obtain a certificate from the user
   - Patched: calls sendCert(null) directly without trying to obtain a certificate

3. Comments added to mark the patched sections

REASON FOR PATCHES:
-------------------
The custom QZ Tray server has certificate validation COMPLETELY DISABLED.
It uses ONLY pairing-key (HMAC) authentication instead of certificates.
The original qz-tray.js library expects certificate-based authentication and
fails when the server does not respond to certificate requests.

COMPATIBILITY:
--------------
- Works with the custom QZ Tray server (forked build with certificate validation disabled)
- NOT compatible with standard QZ Tray servers
- Connects to both ws://localhost:8181 and wss://localhost:8182
- Authentication is handled by server-side pairing keys

BACKUP:
-------
The original unpatched version is saved as: qz-tray.js.backup
@@ -1,15 +0,0 @@
[Unit]
Description=Recticel Quality App
After=network.target mariadb.service

[Service]
Type=simple
User=ske087
WorkingDirectory=/home/ske087/quality_recticel
Environment=PATH=/home/ske087/quality_recticel/recticel/bin
ExecStart=/home/ske087/quality_recticel/recticel/bin/python py_app/run.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
@@ -1,454 +0,0 @@
|
||||
{% extends "base.html" %}
|
||||
|
||||
{% block title %}Role Permissions Management{% endblock %}
|
||||
|
||||
{% block head %}
|
||||
<style>
|
||||
.permissions-container {
|
||||
max-width: 1600px;
|
||||
margin: 0 auto;
|
||||
padding: 20px;
|
||||
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
|
||||
}
|
||||
|
||||
.permissions-table-container {
|
||||
background: white;
|
||||
border-radius: 15px;
|
||||
box-shadow: 0 8px 24px rgba(0,0,0,0.15);
|
||||
overflow: hidden;
|
||||
margin: 0 auto 30px auto;
|
||||
border: 2px solid #dee2e6;
|
||||
max-width: 100%;
|
||||
}
|
||||
|
||||
.permissions-table {
|
||||
width: 100%;
|
||||
border-collapse: collapse;
|
||||
font-size: 14px;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.permissions-table thead {
|
||||
background: linear-gradient(135deg, #007bff, #0056b3);
|
||||
color: white;
|
||||
}
|
||||
|
||||
.permissions-table th {
|
||||
padding: 15px 12px;
|
||||
text-align: left;
|
||||
font-weight: 600;
|
||||
border-bottom: 2px solid rgba(255,255,255,0.2);
|
||||
}
|
||||
|
||||
.permissions-table th:nth-child(1) { width: 15%; }
|
||||
.permissions-table th:nth-child(2) { width: 20%; }
|
||||
.permissions-table th:nth-child(3) { width: 25%; }
|
||||
.permissions-table th:nth-child(4) { width: 40%; }
|
||||
|
||||
.permission-row {
|
||||
border-bottom: 2px solid #dee2e6 !important;
|
||||
transition: all 0.3s ease;
|
||||
}
|
||||
|
||||
.permission-row:hover {
|
||||
background: linear-gradient(135deg, #e3f2fd, #f0f8ff) !important;
|
||||
transform: translateY(-1px) !important;
|
||||
box-shadow: 0 4px 12px rgba(0,123,255,0.15) !important;
|
||||
}
|
||||
|
||||
.role-cell, .module-cell, .page-cell, .functions-cell {
|
||||
padding: 15px 12px !important;
|
||||
vertical-align: top !important;
|
||||
border-right: 1px solid #f1f3f4 !important;
|
||||
}
|
||||
|
||||
.role-cell {
|
||||
border-left: 4px solid #007bff !important;
|
||||
}
|
||||
|
||||
.module-cell {
|
||||
border-left: 2px solid #28a745 !important;
|
||||
}
|
||||
|
||||
.page-cell {
|
||||
border-left: 2px solid #ffc107 !important;
|
||||
}
|
||||
|
||||
.functions-cell {
|
||||
border-left: 2px solid #dc3545 !important;
|
||||
}
|
||||
|
||||
.role-badge {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 8px;
|
||||
background: #e3f2fd;
|
||||
padding: 8px 12px;
|
||||
border-radius: 20px;
|
||||
}
|
||||
|
||||
.functions-grid {
|
||||
display: grid;
|
||||
grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
|
||||
gap: 10px;
|
||||
}
|
||||
|
||||
.function-item {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 8px;
|
||||
padding: 8px 12px;
|
||||
background: #f8f9fa;
|
||||
border-radius: 8px;
|
||||
border: 1px solid #dee2e6;
|
||||
}
|
||||
|
||||
.function-toggle {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
.toggle-slider {
|
||||
position: relative;
|
||||
display: inline-block;
|
||||
width: 40px;
|
||||
height: 20px;
|
||||
background: #ccc;
|
||||
border-radius: 20px;
|
||||
transition: all 0.3s ease;
|
||||
}
|
||||
|
||||
.toggle-slider::before {
|
||||
content: '';
|
||||
position: absolute;
|
||||
top: 2px;
|
||||
left: 2px;
|
||||
width: 16px;
|
||||
height: 16px;
|
||||
background: white;
|
||||
border-radius: 50%;
|
||||
transition: all 0.3s ease;
|
||||
}
|
||||
|
||||
input[type="checkbox"]:checked + .toggle-slider {
|
||||
background: #007bff;
|
||||
}
|
||||
|
||||
input[type="checkbox"]:checked + .toggle-slider::before {
|
||||
transform: translateX(20px);
|
||||
}
|
||||
|
||||
input[type="checkbox"] {
|
||||
display: none;
|
||||
}
|
||||
|
||||
.function-text {
|
||||
font-size: 12px;
|
||||
font-weight: 500;
|
||||
}
|
||||
|
||||
.role-separator, .module-separator {
|
||||
background: #f8f9fa;
|
||||
border-bottom: 1px solid #dee2e6;
|
||||
}
|
||||
|
||||
.separator-line {
|
||||
padding: 12px 20px;
|
||||
font-weight: 600;
|
||||
color: #495057;
|
||||
background: linear-gradient(135deg, #e9ecef, #f8f9fa);
|
||||
}
|
||||
|
||||
.module-badge {
|
||||
padding: 8px 15px;
|
||||
background: linear-gradient(135deg, #28a745, #20c997);
|
||||
color: white;
|
||||
border-radius: 15px;
|
||||
font-weight: 500;
|
||||
}
|
||||
|
||||
.action-buttons-container {
|
||||
text-align: center;
|
||||
margin: 30px 0;
|
||||
}
|
||||
|
||||
.action-buttons {
|
||||
display: flex;
|
||||
justify-content: center;
|
||||
gap: 20px;
|
||||
flex-wrap: wrap;
|
||||
}
|
||||
|
||||
.btn {
|
||||
padding: 12px 24px;
|
||||
border: none;
|
||||
border-radius: 8px;
|
||||
font-weight: 600;
|
||||
cursor: pointer;
|
||||
transition: all 0.3s ease;
|
||||
text-decoration: none;
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
gap: 8px;
|
||||
}
|
||||
|
||||
.btn-primary {
|
||||
background: #007bff;
|
||||
color: white;
|
||||
}
|
||||
|
||||
.btn-primary:hover {
|
||||
background: #0056b3;
|
||||
transform: translateY(-2px);
|
||||
box-shadow: 0 4px 12px rgba(0,123,255,0.3);
|
||||
}
|
||||
|
||||
.btn-secondary {
|
||||
background: #6c757d;
|
||||
color: white;
|
||||
}
|
||||
|
||||
.btn-secondary:hover {
|
||||
background: #545b62;
|
||||
transform: translateY(-2px);
|
||||
box-shadow: 0 4px 12px rgba(108,117,125,0.3);
|
||||
}
|
||||
</style>
|
||||
{% endblock %}
|
||||
|
||||
{% block content %}
|
||||
<div class="permissions-container">
|
||||
<div style="text-align: center; margin-bottom: 40px;">
|
||||
<h1 style="color: #2c3e50; margin-bottom: 15px; font-weight: 700; font-size: 32px;">
|
||||
🔐 Role Permissions Management
|
||||
</h1>
|
||||
<p style="color: #6c757d; font-size: 16px;">
|
||||
Configure granular access permissions for each role in the system
|
||||
</p>
|
||||
</div>
|
||||
|
||||
<!-- 4-Column Permissions Table -->
|
||||
<div class="permissions-table-container">
|
||||
<table class="permissions-table" id="permissionsTable">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>👤 Role Name</th>
|
||||
<th>🏢 Module Name</th>
|
||||
<th>📄 Page Name</th>
|
||||
<th>⚙️ Functions & Permissions</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
{% set current_role = '' %}
|
||||
{% set current_module = '' %}
|
||||
{% for role_name, role_data in roles.items() %}
|
||||
{% for page_key, page_data in pages.items() %}
|
||||
{% for section_key, section_data in page_data.sections.items() %}
|
||||
|
||||
<!-- Role separator row -->
|
||||
{% if current_role != role_name %}
|
||||
{% set current_role = role_name %}
|
||||
<tr class="role-separator">
|
||||
<td colspan="4">
|
||||
<div class="separator-line">
|
||||
<span>{{ role_data.display_name }} (Level {{ role_data.level }})</span>
|
||||
</div>
|
||||
</td>
|
||||
</tr>
|
||||
{% endif %}
|
||||
|
||||
<!-- Module separator -->
|
||||
{% if current_module != page_key %}
|
||||
{% set current_module = page_key %}
|
||||
<tr class="module-separator">
|
||||
<td></td>
|
||||
<td colspan="3">
|
||||
<div style="padding: 8px 15px;">
|
||||
<span class="module-badge">{{ page_data.name }}</span>
|
||||
</div>
|
||||
</td>
|
||||
</tr>
|
||||
{% endif %}
|
||||
|
||||
<tr class="permission-row" data-role="{{ role_name }}" data-module="{{ page_key }}">
|
||||
<td class="role-cell">
|
||||
<div class="role-badge">
|
||||
<span>👤</span>
|
||||
<span>{{ role_data.display_name }}</span>
|
||||
</div>
|
||||
</td>
|
||||
<td class="module-cell">
|
||||
<span>{{ page_data.name }}</span>
|
||||
</td>
|
||||
<td class="page-cell">
|
||||
<div style="display: flex; align-items: center; gap: 8px;">
|
||||
<span>📋</span>
|
||||
<span>{{ section_data.name }}</span>
|
||||
</div>
|
||||
</td>
|
||||
<td class="functions-cell">
|
||||
<div class="functions-grid">
|
||||
{% for action in section_data.actions %}
|
||||
{% set permission_key = page_key + '.' + section_key + '.' + action %}
|
||||
<div class="function-item" data-permission="{{ permission_key }}" data-role="{{ role_name }}">
|
||||
<label class="function-toggle">
|
||||
<input type="checkbox"
|
||||
data-role="{{ role_name }}"
|
||||
data-page="{{ page_key }}"
|
||||
data-section="{{ section_key }}"
|
||||
data-action="{{ action }}"
|
||||
onchange="togglePermission('{{ role_name }}', '{{ page_key }}', '{{ section_key }}', '{{ action }}', this)">
|
||||
<span class="toggle-slider"></span>
|
||||
</label>
|
||||
<span class="function-text">{{ action_names[action] }}</span>
|
||||
</div>
|
||||
{% endfor %}
|
||||
</div>
|
||||
</td>
|
||||
</tr>
|
||||
{% endfor %}
|
||||
{% set current_module = '' %}
|
||||
{% endfor %}
|
||||
{% endfor %}
|
||||
</tbody>
|
||||
</table>
|
||||
</div>
|
||||
|
||||
<!-- Action Buttons -->
|
||||
<div class="action-buttons-container">
|
||||
<div class="action-buttons">
|
||||
<button class="btn btn-secondary" onclick="resetAllToDefaults()">
|
||||
<span>🔄</span>
|
||||
Reset All to Defaults
|
||||
</button>
|
||||
<button class="btn btn-primary" onclick="saveAllPermissions()">
|
||||
<span>💾</span>
|
||||
Save All Changes
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
// Initialize data from backend
|
||||
let permissions = {{ permissions_json|safe }};
|
||||
let rolePermissions = {{ role_permissions_json|safe }};
|
||||
|
||||
// Toggle permission function
|
||||
function togglePermission(roleName, pageKey, sectionKey, action, checkbox) {
|
||||
const isChecked = checkbox.checked;
|
||||
const permissionKey = `${pageKey}.${sectionKey}.${action}`;
|
||||
|
||||
// Update visual state of the function item
|
||||
const functionItem = checkbox.closest('.function-item');
|
||||
if (isChecked) {
|
||||
functionItem.classList.remove('disabled');
|
||||
} else {
|
||||
functionItem.classList.add('disabled');
|
||||
}
|
||||
|
||||
// Update data structure (flat array format)
|
||||
if (!rolePermissions[roleName]) {
|
||||
rolePermissions[roleName] = [];
|
||||
}
|
||||
|
||||
if (isChecked && !rolePermissions[roleName].includes(permissionKey)) {
|
||||
rolePermissions[roleName].push(permissionKey);
|
||||
} else if (!isChecked) {
|
||||
const index = rolePermissions[roleName].indexOf(permissionKey);
|
||||
if (index > -1) {
|
||||
rolePermissions[roleName].splice(index, 1);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Save all permissions
|
||||
function saveAllPermissions() {
|
||||
// Convert flat permission arrays to nested structure for backend
|
||||
const structuredPermissions = {};
|
||||
|
||||
for (const [roleName, permissions] of Object.entries(rolePermissions)) {
|
||||
structuredPermissions[roleName] = {};
|
||||
|
||||
permissions.forEach(permissionKey => {
|
||||
const [pageKey, sectionKey, action] = permissionKey.split('.');
|
||||
|
||||
if (!structuredPermissions[roleName][pageKey]) {
|
||||
structuredPermissions[roleName][pageKey] = {};
|
||||
}
|
||||
if (!structuredPermissions[roleName][pageKey][sectionKey]) {
|
||||
structuredPermissions[roleName][pageKey][sectionKey] = [];
|
||||
}
|
||||
|
||||
structuredPermissions[roleName][pageKey][sectionKey].push(action);
|
||||
});
|
||||
}
|
||||
|
||||
fetch('/settings/save_all_role_permissions', {
|
||||
method: 'POST',
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
},
|
||||
body: JSON.stringify({
|
||||
permissions: structuredPermissions
|
||||
})
|
||||
})
|
||||
.then(response => response.json())
|
||||
.then(data => {
|
||||
if (data.success) {
|
||||
alert('All permissions saved successfully!');
|
||||
} else {
|
||||
alert('Error saving permissions: ' + data.error);
|
||||
}
|
||||
})
|
||||
.catch(error => {
|
||||
alert('Error saving permissions: ' + error);
|
||||
});
|
||||
}
|
||||
|
||||
// Reset all permissions to defaults
|
||||
function resetAllToDefaults() {
|
||||
if (confirm('Are you sure you want to reset ALL role permissions to defaults? This will overwrite all current settings.')) {
|
||||
fetch('/settings/reset_all_role_permissions', {
|
||||
method: 'POST',
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
}
|
||||
})
|
||||
.then(response => response.json())
|
||||
.then(data => {
|
||||
if (data.success) {
|
||||
location.reload();
|
||||
} else {
|
||||
alert('Error resetting permissions: ' + data.error);
|
||||
}
|
||||
})
|
||||
.catch(error => {
|
||||
alert('Error resetting permissions: ' + error);
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// Initialize checkbox states when page loads
|
||||
document.addEventListener('DOMContentLoaded', function() {
|
||||
// Set initial states based on data
|
||||
document.querySelectorAll('.function-item').forEach(item => {
|
||||
const roleName = item.dataset.role;
|
||||
const permissionKey = item.dataset.permission;
|
||||
const checkbox = item.querySelector('input[type="checkbox"]');
|
||||
|
||||
// Check if this role has this permission
|
||||
const hasPermission = rolePermissions[roleName] && rolePermissions[roleName].includes(permissionKey);
|
||||
|
||||
if (hasPermission) {
|
||||
checkbox.checked = true;
|
||||
item.classList.remove('disabled');
|
||||
} else {
|
||||
checkbox.checked = false;
|
||||
item.classList.add('disabled');
|
||||
}
|
||||
});
|
||||
});
|
||||
</script>
|
||||
{% endblock %}
|
||||
@@ -1,111 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Test script for the new simplified 4-tier permission system
|
||||
"""
|
||||
|
||||
import sys
|
||||
import os
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'app'))
|
||||
|
||||
from permissions_simple import check_access, validate_user_modules, get_user_accessible_pages
|
||||
|
||||
def test_permission_system():
|
||||
"""Test the new permission system with various scenarios"""
|
||||
print("Testing Simplified 4-Tier Permission System")
|
||||
print("=" * 50)
|
||||
|
||||
# Test cases: (role, modules, page, expected_result)
|
||||
test_cases = [
|
||||
# Superadmin tests
|
||||
('superadmin', [], 'dashboard', True),
|
||||
('superadmin', [], 'role_permissions', True),
|
||||
('superadmin', [], 'quality', True),
|
||||
('superadmin', [], 'warehouse', True),
|
||||
|
||||
# Admin tests
|
||||
('admin', [], 'dashboard', True),
|
||||
('admin', [], 'role_permissions', False), # Restricted for admin
|
||||
('admin', [], 'download_extension', False), # Restricted for admin
|
||||
('admin', [], 'quality', True),
|
||||
('admin', [], 'warehouse', True),
|
||||
|
||||
# Manager tests
|
||||
('manager', ['quality'], 'quality', True),
|
||||
('manager', ['quality'], 'quality_reports', True),
|
||||
('manager', ['quality'], 'warehouse', False), # No warehouse module
|
||||
('manager', ['warehouse'], 'warehouse', True),
|
||||
('manager', ['warehouse'], 'quality', False), # No quality module
|
||||
('manager', ['quality', 'warehouse'], 'quality', True), # Multiple modules
|
||||
('manager', ['quality', 'warehouse'], 'warehouse', True),
|
||||
|
||||
# Worker tests
|
||||
('worker', ['quality'], 'quality', True),
|
||||
('worker', ['quality'], 'quality_reports', False), # Workers can't access reports
|
||||
('worker', ['quality'], 'warehouse', False), # No warehouse module
|
||||
('worker', ['warehouse'], 'move_orders', True),
|
||||
('worker', ['warehouse'], 'create_locations', False), # Workers can't create locations
|
||||
|
||||
# Invalid role test
|
||||
('invalid_role', ['quality'], 'quality', False),
|
||||
]
|
||||
|
||||
print("Testing access control:")
|
||||
print("-" * 30)
|
||||
|
||||
passed = 0
|
||||
failed = 0
|
||||
|
||||
for role, modules, page, expected in test_cases:
|
||||
result = check_access(role, modules, page)
|
||||
status = "PASS" if result == expected else "FAIL"
|
||||
print(f"{status}: {role:12} {str(modules):20} {page:18} -> {result} (expected {expected})")
|
||||
|
||||
if result == expected:
|
||||
passed += 1
|
||||
else:
|
||||
failed += 1
|
||||
|
||||
print(f"\nResults: {passed} passed, {failed} failed")
|
||||
|
||||
# Test module validation
|
||||
print("\nTesting module validation:")
|
||||
print("-" * 30)
|
||||
|
||||
validation_tests = [
|
||||
('superadmin', ['quality'], True), # Superadmin can have any modules
|
||||
('admin', ['warehouse'], True), # Admin can have any modules
|
||||
('manager', ['quality'], True), # Manager can have one module
|
||||
('manager', ['quality', 'warehouse'], True), # Manager can have multiple modules
|
||||
('manager', [], False), # Manager must have at least one module
|
||||
('worker', ['quality'], True), # Worker can have one module
|
||||
('worker', ['quality', 'warehouse'], False), # Worker cannot have multiple modules
|
||||
('worker', [], False), # Worker must have exactly one module
|
||||
('invalid_role', ['quality'], False), # Invalid role
|
||||
]
|
||||
|
||||
for role, modules, expected in validation_tests:
|
||||
is_valid, error_msg = validate_user_modules(role, modules)
|
||||
status = "PASS" if is_valid == expected else "FAIL"
|
||||
print(f"{status}: {role:12} {str(modules):20} -> {is_valid} (expected {expected})")
|
||||
if error_msg:
|
||||
print(f" Error: {error_msg}")
|
||||
|
||||
# Test accessible pages for different users
|
||||
print("\nTesting accessible pages:")
|
||||
print("-" * 30)
|
||||
|
||||
user_tests = [
|
||||
('superadmin', []),
|
||||
('admin', []),
|
||||
('manager', ['quality']),
|
||||
('manager', ['warehouse']),
|
||||
('worker', ['quality']),
|
||||
('worker', ['warehouse']),
|
||||
]
|
||||
|
||||
for role, modules in user_tests:
|
||||
accessible_pages = get_user_accessible_pages(role, modules)
|
||||
print(f"{role:12} {str(modules):20} -> {len(accessible_pages)} pages: {', '.join(accessible_pages[:5])}{'...' if len(accessible_pages) > 5 else ''}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_permission_system()
|
||||
@@ -1,23 +0,0 @@
python3 -m venv recticel
source recticel/bin/activate
python /home/ske087/quality_recticel/py_app/run.py

sudo apt install mariadb-server mariadb-client
sudo apt-get install libmariadb-dev libmariadb-dev-compat

sudo mysql -u root -p

root password: Initaial01!  (at home: Matei@123)

CREATE DATABASE trasabilitate_database;
CREATE USER 'trasabilitate'@'localhost' IDENTIFIED BY 'Initial01!';
GRANT ALL PRIVILEGES ON trasabilitate_database.* TO 'trasabilitate'@'localhost';
FLUSH PRIVILEGES;
EXIT

Server Domain/IP Address: testserver.com
Port: 3602
Database Name: recticel
Username: sa
Password: 12345678
@@ -1,32 +0,0 @@
# Steps to Prepare Environment for Installing Python Requirements

1. Change ownership of the project directory (if needed):
   sudo chown -R $USER:$USER /home/ske087/quality_recticel

2. Install Python venv module:
   sudo apt install -y python3-venv

3. Create and activate the virtual environment:
   python3 -m venv recticel
   source recticel/bin/activate

4. Install MariaDB server and development libraries:
   sudo apt install -y mariadb-server libmariadb-dev

5. Create MariaDB database and user:
   sudo mysql -e "CREATE DATABASE trasabilitate; CREATE USER 'sa'@'localhost' IDENTIFIED BY 'qasdewrftgbcgfdsrytkmbf\"b'; GRANT ALL PRIVILEGES ON quality.* TO 'sa'@'localhost'; FLUSH PRIVILEGES;"
   sa
   qasdewrftgbcgfdsrytkmbf\"b

   trasabilitate
   Initial01!

6. Install build tools (for compiling Python packages):
   sudo apt install -y build-essential

7. Install Python development headers:
   sudo apt install -y python3-dev

8. Install Python requirements:
   pip install -r py_app/requirements.txt
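
9. (Optional) Smoke-test the database connection from inside the virtual
   environment. The snippet below is an illustrative sketch and not part of
   the original notes; the credentials shown are placeholders, so substitute
   whatever database name, user and password you actually created in step 5:

       # check_db.py - hypothetical helper, run with: python check_db.py
       import mariadb  # provided by the mariadb package from requirements.txt

       conn = mariadb.connect(
           user="trasabilitate",      # placeholder: your MariaDB user
           password="CHANGE_ME",      # placeholder: your MariaDB password
           host="localhost",
           port=3306,
           database="trasabilitate",  # placeholder: your database name
       )
       cur = conn.cursor()
       cur.execute("SELECT 1")
       print("Database reachable:", cur.fetchone()[0] == 1)
       conn.close()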
old code/tray/.github/workflows/build.yaml
@@ -1,60 +0,0 @@
name: build

on: [push, pull_request]

jobs:
  ubuntu:
    runs-on: [ubuntu-latest]
    strategy:
      matrix:
        java: [11, 21]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          java-version: ${{ matrix.java }}
          distribution: 'liberica'
      - run: sudo apt-get install nsis makeself
      - run: ant makeself
      - run: sudo out/qz-tray-*.run
      - run: /opt/qz-tray/qz-tray --version
      - run: ant nsis

  macos:
    runs-on: [macos-latest]
    strategy:
      matrix:
        java: [11, 21]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          java-version: ${{ matrix.java }}
          distribution: 'liberica'
      - run: brew install nsis makeself
      - run: ant pkgbuild
      - run: echo "Setting CA trust settings to 'allow' (https://github.com/actions/runner-images/issues/4519)"
      - run: security authorizationdb read com.apple.trust-settings.admin > /tmp/trust-settings-backup.xml
      - run: sudo security authorizationdb write com.apple.trust-settings.admin allow
      - run: sudo installer -pkg out/qz-tray-*.pkg -target /
      - run: echo "Restoring CA trust settings back to default"
      - run: sudo security authorizationdb write com.apple.trust-settings.admin < /tmp/trust-settings-backup.xml
      - run: "'/Applications/QZ Tray.app/Contents/MacOS/QZ Tray' --version"
      - run: ant makeself
      - run: ant nsis

  windows:
    runs-on: [windows-latest]
    strategy:
      matrix:
        java: [11, 21]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          java-version: ${{ matrix.java }}
          distribution: 'liberica'
      - run: choco install nsis
      - run: ant nsis
      - run: Start-Process -Wait ./out/qz-tray-*.exe -ArgumentList "/S"
      - run: "&'C:/Program Files/QZ Tray/qz-tray.exe' --wait --version|Out-Null"
old code/tray/.gitignore
@@ -1,33 +0,0 @@
# Build outputs
/out/
*.class

# Node modules
/js/node_modules

# JavaFX runtime (too large, should be downloaded)
/lib/javafx*

# IDE files
/.idea/workspace.xml
/.idea/misc.xml
/.idea/uiDesigner.xml
/.idea/compiler.xml
.idea/
*.iml
.vscode/

# OS files
.DS_Store
Thumbs.db
windows-debug-launcher.nsi.in

# Build artifacts
/fx.zip
/provision.json

# Private keys
/ant/private/qz.ks

# Logs
*.log
@@ -1,14 +0,0 @@
FROM openjdk:11 as build
RUN apt-get update
RUN apt-get install -y ant nsis makeself
COPY . /usr/src/tray
WORKDIR /usr/src/tray
RUN ant makeself

FROM openjdk:11-jre as install
RUN apt-get update
RUN apt-get install -y libglib2.0-bin
COPY --from=build /usr/src/tray/out/*.run /tmp
RUN find /tmp -iname "*.run" -exec {} \;
WORKDIR /opt/qz-tray
ENTRYPOINT ["/opt/qz-tray/qz-tray"]
@@ -1,601 +0,0 @@
|
||||
ATTRIBUTION, LICENSING AND SUMMARY OF COMPONENTS
|
||||
Version 1.2, February 2016
|
||||
|
||||
Project Source Code (unless otherwise specified):
|
||||
Copyright (c) 2013-2016 QZ Industries, LLC
|
||||
LGPL-2.1 License (attached)
|
||||
https://qz.io
|
||||
|
||||
All API Examples (unless otherwise specified):
|
||||
Covers: JavaScript examples, Wiki API Examples, Signing API Examples
|
||||
Public Domain (no restrictions)
|
||||
______________________________________________________________________
|
||||
|
||||
Other licenses:
|
||||
|
||||
jOOR Reflection Library (As-Is, No Modifications)
|
||||
Copyright (c) 2011-2012, Lukas Eder, lukas.eder@gmail.com
|
||||
Apache License, Version 2.0 (attached), with Copyright Notice
|
||||
https://github.com/jOOQ/jOOR
|
||||
|
||||
|
||||
jetty Web Server Library (As-Is, No Modifications)
|
||||
Copyright (c) 1995-2014 Eclipse Foundation
|
||||
Apache License, Version 2.0 (attached), with Copyright Notice
|
||||
http://eclipse.org/jetty/
|
||||
|
||||
|
||||
Apache log4j (As-Is, No Modifications)
|
||||
Copyright (C) 1999-2005 The Apache Software Foundation
|
||||
Apache License, Version 2.0 (attached), with Copyright Notice
|
||||
https://logging.apache.org/
|
||||
|
||||
|
||||
Apache PDFBox (As-Is, No Modifications)
|
||||
Copyright (C) 2009–2015 The Apache Software Foundation
|
||||
Apache License, Version 2.0 (attached), with Copyright Notice
|
||||
https://pdfbox.apache.org/
|
||||
|
||||
|
||||
jSSC Library (As-Is, No Modifications)
|
||||
Copyright (c) 2010-2013 Alexey Sokolov (scream3r)
|
||||
LGPL-2.1 License (attached), with Copyright notice
|
||||
https://code.google.com/p/java-simple-serial-connector/
|
||||
|
||||
|
||||
hid4java (As-Is, No Modifications)
|
||||
Copyright (c) 2014 Gary Rowe
|
||||
MIT License (attached), with Copyright notice
|
||||
https://github.com/gary-rowe/hid4java
|
||||
|
||||
|
||||
jsemver (As-Is, No Modifications)
|
||||
Copyright 2012-2014 Zafar Khaja <zafarkhaja@gmail.com>
|
||||
MIT License (attached), with Copyright notice
|
||||
https://github.com/zafarkhaja/jsemver
|
||||
______________________________________________________________________
|
||||
|
||||
|
||||
LGPL 2.1
|
||||
Applies ONLY to: qz-tray, jssc
|
||||
|
||||
|
||||
GNU LESSER GENERAL PUBLIC LICENSE
|
||||
Version 2.1, February 1999
|
||||
|
||||
Copyright (C) 1991, 1999 Free Software Foundation, Inc.
|
||||
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
Everyone is permitted to copy and distribute verbatim copies
|
||||
of this license document, but changing it is not allowed.
|
||||
|
||||
[This is the first released version of the Lesser GPL. It also counts
|
||||
as the successor of the GNU Library Public License, version 2, hence
|
||||
the version number 2.1.]
|
||||
|
||||
Preamble
|
||||
|
||||
The licenses for most software are designed to take away your
|
||||
freedom to share and change it. By contrast, the GNU General Public
|
||||
Licenses are intended to guarantee your freedom to share and change
|
||||
free software--to make sure the software is free for all its users.
|
||||
|
||||
This license, the Lesser General Public License, applies to some
|
||||
specially designated software packages--typically libraries--of the
|
||||
Free Software Foundation and other authors who decide to use it. You
|
||||
can use it too, but we suggest you first think carefully about whether
|
||||
this license or the ordinary General Public License is the better
|
||||
strategy to use in any particular case, based on the explanations below.
|
||||
|
||||
When we speak of free software, we are referring to freedom of use,
|
||||
not price. Our General Public Licenses are designed to make sure that
|
||||
you have the freedom to distribute copies of free software (and charge
|
||||
for this service if you wish); that you receive source code or can get
|
||||
it if you want it; that you can change the software and use pieces of
|
||||
it in new free programs; and that you are informed that you can do
|
||||
these things.
|
||||
|
||||
To protect your rights, we need to make restrictions that forbid
|
||||
distributors to deny you these rights or to ask you to surrender these
|
||||
rights. These restrictions translate to certain responsibilities for
|
||||
you if you distribute copies of the library or if you modify it.
|
||||
|
||||
For example, if you distribute copies of the library, whether gratis
|
||||
or for a fee, you must give the recipients all the rights that we gave
|
||||
you. You must make sure that they, too, receive or can get the source
|
||||
code. If you link other code with the library, you must provide
|
||||
complete object files to the recipients, so that they can relink them
|
||||
with the library after making changes to the library and recompiling
|
||||
it. And you must show them these terms so they know their rights.
|
||||
|
||||
We protect your rights with a two-step method: (1) we copyright the
|
||||
library, and (2) we offer you this license, which gives you legal
|
||||
permission to copy, distribute and/or modify the library.
|
||||
|
||||
To protect each distributor, we want to make it very clear that
|
||||
there is no warranty for the free library. Also, if the library is
|
||||
modified by someone else and passed on, the recipients should know
|
||||
that what they have is not the original version, so that the original
|
||||
author's reputation will not be affected by problems that might be
|
||||
introduced by others.
|
||||
|
||||
Finally, software patents pose a constant threat to the existence of
|
||||
any free program. We wish to make sure that a company cannot
|
||||
effectively restrict the users of a free program by obtaining a
|
||||
restrictive license from a patent holder. Therefore, we insist that
|
||||
any patent license obtained for a version of the library must be
|
||||
consistent with the full freedom of use specified in this license.
|
||||
|
||||
Most GNU software, including some libraries, is covered by the
|
||||
ordinary GNU General Public License. This license, the GNU Lesser
|
||||
General Public License, applies to certain designated libraries, and
|
||||
is quite different from the ordinary General Public License. We use
|
||||
this license for certain libraries in order to permit linking those
|
||||
libraries into non-free programs.
|
||||
|
||||
When a program is linked with a library, whether statically or using
|
||||
a shared library, the combination of the two is legally speaking a
|
||||
combined work, a derivative of the original library. The ordinary
|
||||
General Public License therefore permits such linking only if the
|
||||
entire combination fits its criteria of freedom. The Lesser General
|
||||
Public License permits more lax criteria for linking other code with
|
||||
the library.
|
||||
|
||||
We call this license the "Lesser" General Public License because it
|
||||
does Less to protect the user's freedom than the ordinary General
|
||||
Public License. It also provides other free software developers Less
|
||||
of an advantage over competing non-free programs. These disadvantages
|
||||
are the reason we use the ordinary General Public License for many
|
||||
libraries. However, the Lesser license provides advantages in certain
|
||||
special circumstances.
|
||||
|
||||
For example, on rare occasions, there may be a special need to
|
||||
encourage the widest possible use of a certain library, so that it becomes
|
||||
a de-facto standard. To achieve this, non-free programs must be
|
||||
allowed to use the library. A more frequent case is that a free
|
||||
library does the same job as widely used non-free libraries. In this
|
||||
case, there is little to gain by limiting the free library to free
|
||||
software only, so we use the Lesser General Public License.
|
||||
|
||||
In other cases, permission to use a particular library in non-free
|
||||
programs enables a greater number of people to use a large body of
|
||||
free software. For example, permission to use the GNU C Library in
|
||||
non-free programs enables many more people to use the whole GNU
|
||||
operating system, as well as its variant, the GNU/Linux operating
|
||||
system.
|
||||
|
||||
Although the Lesser General Public License is Less protective of the
|
||||
users' freedom, it does ensure that the user of a program that is
|
||||
linked with the Library has the freedom and the wherewithal to run
|
||||
that program using a modified version of the Library.
|
||||
|
||||
The precise terms and conditions for copying, distribution and
|
||||
modification follow. Pay close attention to the difference between a
|
||||
"work based on the library" and a "work that uses the library". The
|
||||
former contains code derived from the library, whereas the latter must
|
||||
be combined with the library in order to run.
|
||||
|
||||
GNU LESSER GENERAL PUBLIC LICENSE
|
||||
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
|
||||
|
||||
0. This License Agreement applies to any software library or other
|
||||
program which contains a notice placed by the copyright holder or
|
||||
other authorized party saying it may be distributed under the terms of
|
||||
this Lesser General Public License (also called "this License").
|
||||
Each licensee is addressed as "you".
|
||||
|
||||
A "library" means a collection of software functions and/or data
|
||||
prepared so as to be conveniently linked with application programs
|
||||
(which use some of those functions and data) to form executables.
|
||||
|
||||
The "Library", below, refers to any such software library or work
|
||||
which has been distributed under these terms. A "work based on the
|
||||
Library" means either the Library or any derivative work under
|
||||
copyright law: that is to say, a work containing the Library or a
|
||||
portion of it, either verbatim or with modifications and/or translated
|
||||
straightforwardly into another language. (Hereinafter, translation is
|
||||
included without limitation in the term "modification".)
|
||||
|
||||
"Source code" for a work means the preferred form of the work for
|
||||
making modifications to it. For a library, complete source code means
|
||||
all the source code for all modules it contains, plus any associated
|
||||
interface definition files, plus the scripts used to control compilation
|
||||
and installation of the library.
|
||||
|
||||
Activities other than copying, distribution and modification are not
|
||||
covered by this License; they are outside its scope. The act of
|
||||
running a program using the Library is not restricted, and output from
|
||||
such a program is covered only if its contents constitute a work based
|
||||
on the Library (independent of the use of the Library in a tool for
|
||||
writing it). Whether that is true depends on what the Library does
|
||||
and what the program that uses the Library does.
|
||||
|
||||
1. You may copy and distribute verbatim copies of the Library's
|
||||
complete source code as you receive it, in any medium, provided that
|
||||
you conspicuously and appropriately publish on each copy an
|
||||
appropriate copyright notice and disclaimer of warranty; keep intact
|
||||
all the notices that refer to this License and to the absence of any
|
||||
warranty; and distribute a copy of this License along with the
|
||||
Library.
|
||||
|
||||
You may charge a fee for the physical act of transferring a copy,
|
||||
and you may at your option offer warranty protection in exchange for a
|
||||
fee.
|
||||
|
||||
2. You may modify your copy or copies of the Library or any portion
|
||||
of it, thus forming a work based on the Library, and copy and
|
||||
distribute such modifications or work under the terms of Section 1
|
||||
above, provided that you also meet all of these conditions:
|
||||
|
||||
a) The modified work must itself be a software library.
|
||||
|
||||
b) You must cause the files modified to carry prominent notices
|
||||
stating that you changed the files and the date of any change.
|
||||
|
||||
c) You must cause the whole of the work to be licensed at no
|
||||
charge to all third parties under the terms of this License.
|
||||
|
||||
d) If a facility in the modified Library refers to a function or a
|
||||
table of data to be supplied by an application program that uses
|
||||
the facility, other than as an argument passed when the facility
|
||||
is invoked, then you must make a good faith effort to ensure that,
|
||||
in the event an application does not supply such function or
|
||||
table, the facility still operates, and performs whatever part of
|
||||
its purpose remains meaningful.
|
||||
|
||||
(For example, a function in a library to compute square roots has
|
||||
a purpose that is entirely well-defined independent of the
|
||||
application. Therefore, Subsection 2d requires that any
|
||||
application-supplied function or table used by this function must
|
||||
be optional: if the application does not supply it, the square
|
||||
root function must still compute square roots.)
|
||||
|
||||
These requirements apply to the modified work as a whole. If
|
||||
identifiable sections of that work are not derived from the Library,
|
||||
and can be reasonably considered independent and separate works in
|
||||
themselves, then this License, and its terms, do not apply to those
|
||||
sections when you distribute them as separate works. But when you
|
||||
distribute the same sections as part of a whole which is a work based
|
||||
on the Library, the distribution of the whole must be on the terms of
|
||||
this License, whose permissions for other licensees extend to the
|
||||
entire whole, and thus to each and every part regardless of who wrote
|
||||
it.
|
||||
|
||||
Thus, it is not the intent of this section to claim rights or contest
|
||||
your rights to work written entirely by you; rather, the intent is to
|
||||
exercise the right to control the distribution of derivative or
|
||||
collective works based on the Library.
|
||||
|
||||
In addition, mere aggregation of another work not based on the Library
|
||||
with the Library (or with a work based on the Library) on a volume of
|
||||
a storage or distribution medium does not bring the other work under
|
||||
the scope of this License.
|
||||
|
||||
3. You may opt to apply the terms of the ordinary GNU General Public
|
||||
License instead of this License to a given copy of the Library. To do
|
||||
this, you must alter all the notices that refer to this License, so
|
||||
that they refer to the ordinary GNU General Public License, version 2,
|
||||
instead of to this License. (If a newer version than version 2 of the
|
||||
ordinary GNU General Public License has appeared, then you can specify
|
||||
that version instead if you wish.) Do not make any other change in
|
||||
these notices.
|
||||
|
||||
Once this change is made in a given copy, it is irreversible for
|
||||
that copy, so the ordinary GNU General Public License applies to all
|
||||
subsequent copies and derivative works made from that copy.
|
||||
|
||||
This option is useful when you wish to copy part of the code of
|
||||
the Library into a program that is not a library.
|
||||
|
||||
4. You may copy and distribute the Library (or a portion or
|
||||
derivative of it, under Section 2) in object code or executable form
|
||||
under the terms of Sections 1 and 2 above provided that you accompany
|
||||
it with the complete corresponding machine-readable source code, which
|
||||
must be distributed under the terms of Sections 1 and 2 above on a
|
||||
medium customarily used for software interchange.
|
||||
|
||||
If distribution of object code is made by offering access to copy
|
||||
from a designated place, then offering equivalent access to copy the
|
||||
source code from the same place satisfies the requirement to
|
||||
distribute the source code, even though third parties are not
|
||||
compelled to copy the source along with the object code.
|
||||
|
||||
5. A program that contains no derivative of any portion of the
|
||||
Library, but is designed to work with the Library by being compiled or
|
||||
linked with it, is called a "work that uses the Library". Such a
|
||||
work, in isolation, is not a derivative work of the Library, and
|
||||
therefore falls outside the scope of this License.
|
||||
|
||||
However, linking a "work that uses the Library" with the Library
|
||||
creates an executable that is a derivative of the Library (because it
|
||||
contains portions of the Library), rather than a "work that uses the
|
||||
library". The executable is therefore covered by this License.
|
||||
Section 6 states terms for distribution of such executables.
|
||||
|
||||
When a "work that uses the Library" uses material from a header file
|
||||
that is part of the Library, the object code for the work may be a
|
||||
derivative work of the Library even though the source code is not.
|
||||
Whether this is true is especially significant if the work can be
|
||||
linked without the Library, or if the work is itself a library. The
|
||||
threshold for this to be true is not precisely defined by law.
|
||||
|
||||
If such an object file uses only numerical parameters, data
|
||||
structure layouts and accessors, and small macros and small inline
|
||||
functions (ten lines or less in length), then the use of the object
|
||||
file is unrestricted, regardless of whether it is legally a derivative
|
||||
work. (Executables containing this object code plus portions of the
|
||||
Library will still fall under Section 6.)
|
||||
|
||||
Otherwise, if the work is a derivative of the Library, you may
|
||||
distribute the object code for the work under the terms of Section 6.
|
||||
Any executables containing that work also fall under Section 6,
|
||||
whether or not they are linked directly with the Library itself.
|
||||
|
||||
6. As an exception to the Sections above, you may also combine or
|
||||
link a "work that uses the Library" with the Library to produce a
|
||||
work containing portions of the Library, and distribute that work
|
||||
under terms of your choice, provided that the terms permit
|
||||
modification of the work for the customer's own use and reverse
|
||||
engineering for debugging such modifications.
|
||||
|
||||
You must give prominent notice with each copy of the work that the
|
||||
Library is used in it and that the Library and its use are covered by
|
||||
this License. You must supply a copy of this License. If the work
|
||||
during execution displays copyright notices, you must include the
|
||||
copyright notice for the Library among them, as well as a reference
|
||||
directing the user to the copy of this License. Also, you must do one
|
||||
of these things:
|
||||
|
||||
a) Accompany the work with the complete corresponding
|
||||
machine-readable source code for the Library including whatever
|
||||
changes were used in the work (which must be distributed under
|
||||
Sections 1 and 2 above); and, if the work is an executable linked
with the Library, with the complete machine-readable "work that
uses the Library", as object code and/or source code, so that the
user can modify the Library and then relink to produce a modified
executable containing the modified Library. (It is understood
that the user who changes the contents of definitions files in the
Library will not necessarily be able to recompile the application
to use the modified definitions.)

b) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (1) uses at run time a
copy of the library already present on the user's computer system,
rather than copying library functions into the executable, and (2)
will operate properly with a modified version of the library, if
the user installs one, as long as the modified version is
interface-compatible with the version that the work was made with.

c) Accompany the work with a written offer, valid for at
least three years, to give the same user the materials
specified in Subsection 6a, above, for a charge no more
than the cost of performing this distribution.

d) If distribution of the work is made by offering access to copy
from a designated place, offer equivalent access to copy the above
specified materials from the same place.

e) Verify that the user has already received a copy of these
materials or that you have already sent this user a copy.

For an executable, the required form of the "work that uses the
Library" must include any data and utility programs needed for
reproducing the executable from it. However, as a special exception,
the materials to be distributed need not include anything that is
normally distributed (in either source or binary form) with the major
components (compiler, kernel, and so on) of the operating system on
which the executable runs, unless that component itself accompanies
the executable.

It may happen that this requirement contradicts the license
restrictions of other proprietary libraries that do not normally
accompany the operating system. Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.

7. You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
library, provided that the separate distribution of the work based on
the Library and of the other library facilities is otherwise
permitted, and provided that you do these two things:

a) Accompany the combined library with a copy of the same work
based on the Library, uncombined with any other library
facilities. This must be distributed under the terms of the
Sections above.

b) Give prominent notice with the combined library of the fact
that part of it is a work based on the Library, and explaining
where to find the accompanying uncombined form of the same work.

8. You may not copy, modify, sublicense, link with, or distribute
the Library except as expressly provided under this License. Any
attempt otherwise to copy, modify, sublicense, link with, or
distribute the Library is void, and will automatically terminate your
rights under this License. However, parties who have received copies,
or rights, from you under this License will not have their licenses
terminated so long as such parties remain in full compliance.

9. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Library or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Library (or any work based on the
Library), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Library or works based on it.

10. Each time you redistribute the Library (or any work based on the
Library), the recipient automatically receives a license from the
original licensor to copy, distribute, link with or modify the Library
subject to these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties with
this License.

11. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Library at all. For example, if a patent
license would not permit royalty-free redistribution of the Library by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Library.

If any portion of this section is held invalid or unenforceable under any
particular circumstance, the balance of the section is intended to apply,
and the section as a whole is intended to apply in other circumstances.

It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.

This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.

12. If the distribution and/or use of the Library is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Library under this License may add
an explicit geographical distribution limitation excluding those countries,
so that distribution is permitted only in or among countries not thus
excluded. In such case, this License incorporates the limitation as if
written in the body of this License.

13. The Free Software Foundation may publish revised and/or new
versions of the Lesser General Public License from time to time.
Such new versions will be similar in spirit to the present version,
but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Library
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation. If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.

14. If you wish to incorporate parts of the Library into other free
programs whose distribution conditions are incompatible with these,
write to the author to ask for permission. For software which is
copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this. Our
decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.

NO WARRANTY

15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Libraries

If you develop a new library, and you want it to be of the greatest
possible use to the public, we recommend making it free software that
everyone can redistribute and change. You can do so by permitting
redistribution under these terms (or, alternatively, under the terms of the
ordinary General Public License).

To apply these terms, attach the following notices to the library. It is
safest to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least the
"copyright" line and a pointer to where the full notice is found.

<one line to give the library's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>

This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.

This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
USA

END OF LGPL 2.1

______________________________________________________________________


Apache 2.0
Applies ONLY to: joor, jetty, Apache PDFBox, Apache log4j


APACHE LICENSE
Version 2.0, January 2004

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

END OF Apache 2.0

______________________________________________________________________


MIT License
Applies ONLY to: hid4java, jsemver

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

END OF MIT License

______________________________________________________________________

END OF ATTRIBUTION, LICENSING AND SUMMARY OF QZ-TRAY COMPONENTS
@@ -1,23 +0,0 @@
QZ Tray
========

[Build status](../../actions) [Releases](../../releases) [Issues](../../issues) [Commits](../../commits)

Browser plugin for sending documents and raw commands to a printer or attached device

## Getting Started
* Download here https://qz.io/download/
* See our [Getting Started](../../wiki/getting-started) guide.
* Visit our home page https://qz.io.

## Support
* File a bug via our [issue tracker](../../issues)
* Ask the community via our [community support page](https://qz.io/support/)
* Ask the developers via [premium support](https://qz.io/contact/) (fees may apply)

## Changelog
* See our [most recent releases](../../releases)

## Java Developer Resources
* [Install dependencies](../../wiki/install-dependencies)
* [Compile, Package](../../wiki/compiling)
@@ -1,11 +0,0 @@
Please feel free to open bug reports on GitHub. Before opening an issue, we ask that you consider whether your issue is a support question or a potential bug with the software.

If you have a support question, first [check the FAQ](https://qz.io/wiki/faq) and the [wiki](https://qz.io/wiki/Home). If you cannot find a solution, please reach out to one of the appropriate channels:

### Community Support

If you need assistance using the software and do not have a paid subscription, please reference our community support channel: https://qz.io/support/

### Premium Support

If you have an active support license with QZ Industries, LLC, please send support requests to support@qz.io
@@ -1,12 +0,0 @@
{
  "title": "${project.name}",
  "background": "${basedir}/ant/apple/dmg-background.png",
  "icon-size": 128,
  "contents": [
    { "x": 501, "y": 154, "type": "link", "path": "/Applications" },
    { "x": 179, "y": 154, "type": "file", "path": "${build.dir}/${project.name}.app" }
  ],
  "code-sign": {
    "signing-identity": "${codesign.activeid}"
  }
}
@@ -1,28 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0"><dict>
    <key>CFBundleDevelopmentRegion</key><string>English</string>
    <key>CFBundleIconFile</key><string>${project.filename}</string>
    <key>CFBundleIdentifier</key><string>${apple.bundleid}</string>
    <key>CFBundlePackageType</key><string>APPL</string>
    <key>CFBundleGetInfoString</key><string>${project.name} ${build.version}</string>
    <key>CFBundleSignature</key><string>${project.name}</string>
    <key>CFBundleExecutable</key><string>${project.name}</string>
    <key>CFBundleVersion</key><string>${build.version}</string>
    <key>CFBundleShortVersionString</key><string>${build.version}</string>
    <key>CFBundleName</key><string>${project.name}</string>
    <key>CFBundleInfoDictionaryVersion</key><string>6.0</string>
    <key>CFBundleURLTypes</key>
    <array>
        <dict>
            <key>CFBundleURLName</key>
            <string>${project.name}</string>
            <key>CFBundleURLSchemes</key>
            <array><string>${vendor.name}</string></array>
        </dict>
    </array>
    <key>LSArchitecturePriority</key>
    <array>
        <string>${apple.target.arch}</string>
    </array>
</dict></plist>
@@ -1,30 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.app-sandbox</key>
    <${build.sandboxed}/>
    <key>com.apple.security.network.client</key>
    <true/>
    <key>com.apple.security.network.server</key>
    <true/>
    <key>com.apple.security.files.all</key>
    <true/>
    <key>com.apple.security.print</key>
    <true/>
    <key>com.apple.security.device.usb</key>
    <true/>
    <key>com.apple.security.device.bluetooth</key>
    <true/>
    <key>com.apple.security.cs.allow-jit</key>
    <true/>
    <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
    <true/>
    <key>com.apple.security.cs.disable-library-validation</key>
    <true/>
    <key>com.apple.security.cs.allow-dyld-environment-variables</key>
    <true/>
    <key>com.apple.security.cs.debugger</key>
    <true/>
</dict>
</plist>
@@ -1,23 +0,0 @@
#!/bin/bash

# Halt on first error
set -e

# Get working directory
DIR=$(cd "$(dirname "$0")" && pwd)
pushd "$DIR/payload/${project.name}.app/Contents/MacOS/"

./"${project.name}" install >> "${install.log}" 2>&1
popd

# Use install target from pkgbuild, an undocumented feature; fallback on sane location
if [ -n "$2" ]; then
    pushd "$2/Contents/MacOS/"
else
    pushd "/Applications/${project.name}.app/Contents/MacOS/"
fi

./"${project.name}" certgen >> "${install.log}" 2>&1

# Start qz by calling open on the .app as an ordinary user
su "$USER" -c "open ../../" || true
@@ -1,31 +0,0 @@
#!/bin/bash

# Halt on first error
set -e

# Clear the log for writing
> "${install.log}"

# Log helper
dbg () {
    echo -e "[BASH] $(date -Iseconds)\n\t$1" >> "${install.log}" 2>&1
}

# Get working directory
dbg "Calculating working directory..."
DIR=$(cd "$(dirname "$0")" && pwd)
dbg "Using working directory $DIR"
dbg "Switching to payload directory $DIR/payload/${project.name}.app/Contents/MacOS/"
pushd "$DIR/payload/${project.name}.app/Contents/MacOS/" >> "${install.log}" 2>&1

# Offer to download Java if missing
dbg "Checking for Java in payload directory..."
if ! ./"${project.name}" --version >> "${install.log}" 2>&1; then
    dbg "Java was not found"
    osascript -e "tell app \"Installer\" to display dialog \"Java is required. Please install Java and try again.\""
    sudo -u "$USER" open "${java.download}"
    exit 1
fi

dbg "Java was found in payload directory, running preinstall"
./"${project.name}" preinstall >> "${install.log}" 2>&1
@@ -1,6 +0,0 @@
# Apple build properties
apple.packager.signid=P5DMU6659X
# jdk9+ flags
# - Tray icon requires workaround https://github.com/dyorgio/macos-tray-icon-fixer/issues/9
# - Dark theme requires workaround https://github.com/bobbylight/Darcula/issues/8
apple.launch.jigsaw=--add-opens java.desktop/sun.lwawt.macosx=ALL-UNNAMED --add-opens java.desktop/java.awt=ALL-UNNAMED --add-exports java.desktop/com.apple.laf=ALL-UNNAMED
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Before Width: | Height: | Size: 48 KiB
Binary file not shown.
Before Width: | Height: | Size: 110 KiB
@@ -1,376 +0,0 @@
|
||||
<project name="apple-installer" basedir="../../" xmlns:if="ant:if">
|
||||
<property file="ant/project.properties"/>
|
||||
<import file="${basedir}/ant/version.xml"/>
|
||||
<import file="${basedir}/ant/platform-detect.xml"/>
|
||||
|
||||
<!--
|
||||
################################################################
|
||||
# Apple Installer #
|
||||
################################################################
|
||||
-->
|
||||
|
||||
<target name="build-pkg" depends="get-identity,add-certificates,get-version,platform-detect">
|
||||
<echo level="info">Creating installer using pkgbuild</echo>
|
||||
<!--
|
||||
#####################################
|
||||
# Create scripts, payload and pkg #
|
||||
#####################################
|
||||
-->
|
||||
|
||||
<mkdir dir="${build.dir}/scripts/payload"/>
|
||||
|
||||
<!-- Get the os-preferred name for the target architecture -->
|
||||
<condition property="apple.target.arch" value="arm64">
|
||||
<isset property="target.arch.aarch64"/>
|
||||
</condition>
|
||||
<property name="apple.target.arch" value="x86_64" description="fallback value"/>
|
||||
|
||||
<!-- Build app without sandboxing by default-->
|
||||
<property name="build.sandboxed" value="false"/>
|
||||
<antcall target="build-app">
|
||||
<param name="bundle.dir" value="${build.dir}/scripts/payload/${project.name}.app"/>
|
||||
</antcall>
|
||||
<!-- Add a break in the logs -->
|
||||
<antcall target="packaging"/>
|
||||
|
||||
<!-- scripts/ -->
|
||||
<copy file="ant/apple/apple-preinstall.sh.in" tofile="${build.dir}/scripts/preinstall">
|
||||
<filterchain><expandproperties/></filterchain>
|
||||
</copy>
|
||||
<copy file="ant/apple/apple-postinstall.sh.in" tofile="${build.dir}/scripts/postinstall">
|
||||
<filterchain><expandproperties/></filterchain>
|
||||
</copy>
|
||||
<chmod perm="a+x" type="file">
|
||||
<fileset dir="${build.dir}/scripts">
|
||||
<include name="preinstall"/>
|
||||
<include name="postinstall"/>
|
||||
</fileset>
|
||||
</chmod>
|
||||
|
||||
<exec executable="pkgbuild" failonerror="true">
|
||||
<arg value="--identifier"/>
|
||||
<arg value="${apple.bundleid}"/>
|
||||
|
||||
<arg value="--nopayload"/>
|
||||
|
||||
<arg value="--install-location"/>
|
||||
<arg value="/Applications/${project.name}.app"/>
|
||||
|
||||
<arg value="--scripts"/>
|
||||
<arg value="${build.dir}/scripts"/>
|
||||
|
||||
<arg value="--version"/>
|
||||
<arg value="${build.version}"/>
|
||||
|
||||
<arg value="--sign" if:true="${codesign.available}"/>
|
||||
<arg value="${codesign.activeid}" if:true="${codesign.available}"/>
|
||||
|
||||
<arg value="${out.dir}/${project.filename}${build.type}-${build.version}-${apple.target.arch}-unbranded.pkg"/>
|
||||
</exec>
|
||||
|
||||
<!-- Branding for qz only -->
|
||||
<condition property="pkg.background" value="pkg-background.tiff" else="pkg-background-blank.tiff">
|
||||
<equals arg1="${project.filename}" arg2="qz-tray"/>
|
||||
</condition>
|
||||
|
||||
<!-- Copy branded resources to out/resources -->
|
||||
<mkdir dir="${out.dir}/resources"/>
|
||||
<copy file="${basedir}/ant/apple/${pkg.background}" tofile="${out.dir}/resources/background.tiff" failonerror="true"/>
|
||||
|
||||
<!-- Create product definition plist that stipulates supported arch -->
|
||||
<copy file="ant/apple/product-def.plist.in" tofile="${build.dir}/product-def.plist">
|
||||
<filterchain><expandproperties/></filterchain>
|
||||
</copy>
|
||||
|
||||
<!-- Create a distribution.xml file for productbuild -->
|
||||
<exec executable="productbuild" failonerror="true">
|
||||
<arg value="--synthesize"/>
|
||||
|
||||
<arg value="--sign" if:true="${codesign.available}"/>
|
||||
<arg value="${codesign.activeid}" if:true="${codesign.available}"/>
|
||||
|
||||
<arg value="--timestamp"/>
|
||||
|
||||
<arg value="--package"/>
|
||||
<arg value="${out.dir}/${project.filename}${build.type}-${build.version}-${apple.target.arch}-unbranded.pkg"/>
|
||||
|
||||
<arg value="--product"/>
|
||||
<arg value="${build.dir}/product-def.plist"/>
|
||||
|
||||
<arg value="--scripts"/>
|
||||
<arg value="${build.dir}/scripts"/>
|
||||
|
||||
<arg value="${out.dir}/distribution.xml"/>
|
||||
</exec>
|
||||
|
||||
<!-- Inject title, background -->
|
||||
<replace file="${out.dir}/distribution.xml" token="<options customize">
|
||||
<replacevalue><![CDATA[<title>@project.name@ @build.version@</title>
|
||||
<background file="background.tiff" mime-type="image/tiff" alignment="bottomleft" scaling="none"/>
|
||||
<background-darkAqua file="background.tiff" mime-type="image/tiff" alignment="bottomleft" scaling="none"/>
|
||||
<options customize]]></replacevalue>
|
||||
<replacefilter token="@project.name@" value="${project.name}"/>
|
||||
<replacefilter token="@build.version@" value="${build.version}"/>
|
||||
</replace>
|
||||
|
||||
<!-- Create a branded .pkg using productbuild -->
|
||||
<exec executable="productbuild" dir="${out.dir}" failonerror="true">
|
||||
<arg value="--sign" if:true="${codesign.available}"/>
|
||||
<arg value="${codesign.activeid}" if:true="${codesign.available}"/>
|
||||
|
||||
<arg value="--timestamp"/>
|
||||
|
||||
<arg value="--distribution"/>
|
||||
<arg value="${out.dir}/distribution.xml"/>
|
||||
|
||||
<arg value="--resources"/>
|
||||
<arg value="${out.dir}/resources"/>
|
||||
|
||||
<arg value="--product"/>
|
||||
<arg value="${build.dir}/product-def.plist"/>
|
||||
|
||||
<arg value="--package-path"/>
|
||||
<arg value="${project.filename}${build.type}-${build.version}-${apple.target.arch}-unbranded.pkg"/>
|
||||
|
||||
<arg value="${out.dir}/${project.filename}${build.type}-${build.version}-${apple.target.arch}.pkg"/>
|
||||
</exec>
|
||||
|
||||
<!-- Cleanup unbranded version -->
|
||||
<delete file="${out.dir}/${project.filename}${build.type}-${build.version}-${apple.target.arch}-unbranded.pkg"/>
|
||||
</target>
|
||||
|
||||
<target name="build-dmg" depends="get-identity,add-certificates,get-version">
|
||||
<echo level="info">Creating app bundle</echo>
|
||||
<!--
|
||||
#####################################
|
||||
# Create payload and bundle as dmg #
|
||||
#####################################
|
||||
-->
|
||||
|
||||
<!-- Dmg JSON -->
|
||||
<copy file="ant/apple/appdmg.json.in" tofile="${build.dir}/appdmg.json">
|
||||
<filterchain><expandproperties/></filterchain>
|
||||
</copy>
|
||||
|
||||
<!-- Build app with sandboxing by default-->
|
||||
<property name="build.sandboxed" value="true"/>
|
||||
<antcall target="build-app">
|
||||
<param name="bundle.dir" value="${build.dir}/${project.name}.app"/>
|
||||
</antcall>
|
||||
<!-- Add a break in the logs -->
|
||||
<antcall target="packaging"/>
|
||||
|
||||
<exec executable="appdmg" failonerror="true">
|
||||
<arg value="${build.dir}/appdmg.json"/>
|
||||
<arg value="${out.dir}/${project.filename}${build.type}-${build.version}.dmg"/>
|
||||
</exec>
|
||||
</target>
|
||||
|
||||
<target name="build-app" depends="get-identity">
|
||||
<!-- App Bundle -->
|
||||
<mkdir dir="${bundle.dir}"/>
|
||||
|
||||
<!-- Contents/ -->
|
||||
<copy file="ant/apple/apple-bundle.plist.in" tofile="${bundle.dir}/Contents/Info.plist">
|
||||
<filterchain><expandproperties/></filterchain>
|
||||
</copy>
|
||||
|
||||
<!-- Contents/MacOS/ -->
|
||||
<mkdir dir="${bundle.dir}/Contents/MacOS"/>
|
||||
<copy file="ant/unix/unix-launcher.sh.in" tofile="${bundle.dir}/Contents/MacOS/${project.name}">
|
||||
<filterchain><expandproperties/></filterchain>
|
||||
</copy>
|
||||
|
||||
<!-- Contents/Resources/ -->
|
||||
<copy todir="${bundle.dir}/Contents/Resources">
|
||||
<fileset dir="${dist.dir}">
|
||||
<include name="${project.filename}.jar"/>
|
||||
<include name="LICENSE.txt"/>
|
||||
<include name="override.crt"/>
|
||||
</fileset>
|
||||
</copy>
|
||||
<copy file="assets/branding/apple-icon.icns" tofile="${bundle.dir}/Contents/Resources/${project.filename}.icns"/>
|
||||
|
||||
<copy file="ant/unix/unix-uninstall.sh.in" tofile="${bundle.dir}/Contents/Resources/uninstall">
|
||||
<filterchain><expandproperties/></filterchain>
|
||||
</copy>
|
||||
|
||||
<copy todir="${bundle.dir}/Contents/Resources/demo">
|
||||
<fileset dir="${dist.dir}/demo" includes="**"/>
|
||||
</copy>
|
||||
|
||||
<!-- Provision files -->
|
||||
<delete dir="${bundle.dir}/Contents/Resources/provision" failonerror="false"/>
|
||||
<copy todir="${bundle.dir}/Contents/Resources/provision" failonerror="false">
|
||||
<fileset dir="${provision.dir}" includes="**"/>
|
||||
</copy>
|
||||
<chmod perm="a+x" type="file" verbose="true">
|
||||
<fileset dir="${bundle.dir}/Contents/Resources/" casesensitive="false">
|
||||
<!-- Must iterate on parent directory in case "provision" is missing -->
|
||||
<include name="provision/*"/>
|
||||
<exclude name="provision/*.crt"/>
|
||||
<exclude name="provision/*.txt"/>
|
||||
<exclude name="provision/*.json"/>
|
||||
</fileset>
|
||||
</chmod>
|
||||
|
||||
<!-- Java runtime -->
|
||||
<copy todir="${bundle.dir}/Contents/PlugIns/Java.runtime">
|
||||
<fileset dir="${dist.dir}/Java.runtime" includes="**"/>
|
||||
</copy>
|
||||
<copy todir="${bundle.dir}/Contents/Frameworks">
|
||||
<fileset dir="${dist.dir}/libs" includes="**"/>
|
||||
</copy>
|
||||
|
||||
<copy todir="${bundle.dir}">
|
||||
<fileset dir="${bundle.dir}" includes="**"/>
|
||||
</copy>
|
||||
|
||||
<!-- set payload files executable -->
|
||||
<chmod perm="a+x" type="file">
|
||||
<fileset dir="${bundle.dir}">
|
||||
<include name="**/${project.name}"/>
|
||||
<include name="**/Resources/uninstall"/>
|
||||
<include name="**/bin/*"/>
|
||||
<include name="**/lib/jspawnhelper"/>
|
||||
</fileset>
|
||||
</chmod>
|
||||
|
||||
<copy file="ant/apple/apple-entitlements.plist.in" tofile="${build.dir}/apple-entitlements.plist">
|
||||
<filterchain><expandproperties/></filterchain>
|
||||
</copy>
|
||||
|
||||
<!-- use xargs to loop over and codesign all files-->
|
||||
<echo level="info" message="Signing ${bundle.dir} using ${codesign.activeid}"/>
|
||||
<!-- Find -X fails on spaces but doesn't failonerror; this may lead to overlooked errors. -->
|
||||
<!-- Currently the only file that may contain a space is the main executable, which we omit from signing anyway. -->
|
||||
<exec executable="bash" failonerror="true" dir="${bundle.dir}">
|
||||
<arg value="-c"/>
|
||||
<arg value="find -X "." -type f -not -path "*/Contents/MacOS/*" -exec sh -c 'file -I "{}" |grep -m1 "x-mach-binary"|cut -f 1 -d \:' \; |xargs codesign --force -s "${codesign.activeid}" --timestamp --options runtime"/>
|
||||
</exec>
|
||||
<exec executable="codesign" failonerror="true">
|
||||
<arg value="--force"/>
|
||||
<arg value="-s"/>
|
||||
<arg value="${codesign.activeid}"/>
|
||||
<arg value="--timestamp"/>
|
||||
<arg value="--options"/>
|
||||
<arg value="runtime"/>
|
||||
<arg value="--entitlement"/>
|
||||
<arg value="${build.dir}/apple-entitlements.plist"/>
|
||||
<arg value="${bundle.dir}/Contents/PlugIns/Java.runtime/Contents/Home/bin/java"/>
|
||||
<arg value="${bundle.dir}/Contents/PlugIns/Java.runtime/Contents/Home/bin/jcmd"/>
|
||||
<arg value="${bundle.dir}/Contents/PlugIns/Java.runtime"/>
|
||||
</exec>
|
||||
<exec executable="codesign" failonerror="true">
|
||||
<arg value="-s"/>
|
||||
<arg value="${codesign.activeid}"/>
|
||||
<arg value="--timestamp"/>
|
||||
<arg value="--options"/>
|
||||
<arg value="runtime"/>
|
||||
<arg value="--entitlement"/>
|
||||
<arg value="${build.dir}/apple-entitlements.plist"/>
|
||||
<arg value="${bundle.dir}"/>
|
||||
</exec>
|
||||
|
||||
<!-- Verify Java.runtime -->
|
||||
<antcall target="verify-signature">
|
||||
<param name="signed.bundle.name" value="Java.runtime"/>
|
||||
<param name="signed.bundle.dir" value="${bundle.dir}/Contents/PlugIns/Java.runtime"/>
|
||||
</antcall>
|
||||
<!-- Verify QZ Tray.app -->
|
||||
<antcall target="verify-signature" >
|
||||
<param name="signed.bundle.name" value="${project.name}.app"/>
|
||||
<param name="signed.bundle.dir" value="${bundle.dir}"/>
|
||||
</antcall>
|
||||
</target>
|
||||
|
||||
<target name="add-certificates" depends="get-identity">
|
||||
<!-- Remove expired certificates -->
|
||||
<exec executable="security">
|
||||
<arg value="delete-certificate"/>
|
||||
<arg value="-Z"/>
|
||||
<arg value="A69020D49B47383064ADD5779911822850235953"/>
|
||||
</exec>
|
||||
<exec executable="security">
|
||||
<arg value="delete-certificate"/>
|
||||
<arg value="-Z"/>
|
||||
<arg value="6FD7892971854384AF40FAD1E0E6C56A992BC5EE"/>
|
||||
</exec>
|
||||
<exec executable="security">
|
||||
<arg value="delete-certificate"/>
|
||||
<arg value="-Z"/>
|
||||
<arg value="F7F10838412D9187042EE1EB018794094AFA189A"/>
|
||||
</exec>
|
||||
|
||||
<exec executable="security">
|
||||
<arg value="add-certificates"/>
|
||||
<arg value="${basedir}/ant/apple/certs/apple-packager.cer"/>
|
||||
<arg value="${basedir}/ant/apple/certs/apple-intermediate.cer"/>
|
||||
<arg value="${basedir}/ant/apple/certs/apple-codesign.cer"/>
|
||||
</exec>
|
||||
</target>
|
||||
|
||||
<target name="copy-dylibs" if="target.os.mac">
|
||||
<echo level="info">Copying native library files to libs</echo>
|
||||
|
||||
<mkdir dir="${dist.dir}/libs"/>
|
||||
<copy todir="${dist.dir}/libs" flatten="true" verbose="true">
|
||||
<fileset dir="${out.dir}/libs-temp">
|
||||
<!--x86_64-->
|
||||
<include name="**/darwin-x86-64/*" if="target.arch.x86_64"/> <!-- jna/hid4java -->
|
||||
<include name="**/osx-x86_64/*" if="target.arch.x86_64"/> <!-- usb4java -->
|
||||
<include name="**/osx_64/*" if="target.arch.x86_64"/> <!-- jssc -->
|
||||
<!--aarch64-->
|
||||
<include name="**/darwin-aarch64/*" if="target.arch.aarch64"/> <!-- jna/hid4java -->
|
||||
<include name="**/osx-aarch64/*" if="target.arch.aarch64"/> <!-- usb4java -->
|
||||
<include name="**/osx_arm64/*" if="target.arch.aarch64"/> <!-- jssc -->
|
||||
</fileset>
|
||||
</copy>
|
||||
</target>
|
||||
|
||||
<target name="get-identity">
|
||||
<property file="ant/apple/apple.properties"/>
|
||||
<!-- Ensure ${apple.packager.signid} is in Keychain -->
|
||||
<exec executable="bash" failonerror="false" resultproperty="codesign.qz">
|
||||
<arg value="-c"/>
|
||||
<arg value="security find-identity -v |grep '(${apple.packager.signid})'"/>
|
||||
</exec>
|
||||
<!-- Fallback to "-" (ad-hoc) if ${apple.packager.signid} isn't found -->
|
||||
<condition property="codesign.activeid" value="${apple.packager.signid}" else="-">
|
||||
<equals arg1="${codesign.qz}" arg2="0"/>
|
||||
</condition>
|
||||
|
||||
<!-- Fallback to "-" (ad-hoc) if ${apple.packager.signid} isn't found -->
|
||||
<condition property="codesign.available">
|
||||
<equals arg1="${codesign.qz}" arg2="0"/>
|
||||
</condition>
|
||||
|
||||
<!-- Property to show warning later -->
|
||||
<condition property="codesign.selfsign">
|
||||
<equals arg1="${codesign.activeid}" arg2="-"/>
|
||||
</condition>
|
||||
</target>
|
||||
|
||||
<target name="verify-signature">
|
||||
<echo level="info">Verifying ${signed.bundle.name} Signature</echo>
|
||||
<echo level="info">Location: ${signed.bundle.dir}</echo>
|
||||
|
||||
<exec executable="codesign" failifexecutionfails="false" resultproperty="signing.status">
|
||||
<arg value="-v"/>
|
||||
<arg value="--strict"/>
|
||||
<arg value="${signed.bundle.dir}"/>
|
||||
</exec>
|
||||
<condition property="message.severity" value="info" else="warn">
|
||||
<equals arg1="${signing.status}" arg2="0"/>
|
||||
</condition>
|
||||
<condition property="message.description"
|
||||
value="Signing passed: Successfully signed"
|
||||
else="Signing failed:: Signing failed (will prevent app from launching)">
|
||||
<equals arg1="${signing.status}" arg2="0"/>
|
||||
</condition>
|
||||
<echo level="${message.severity}">${message.description}</echo>
|
||||
</target>
|
||||
|
||||
<!-- Stub title/separator workaround for build-pkg/build-dmg -->
|
||||
<target name="packaging"/>
|
||||
</project>
|
||||
Binary file not shown.
Binary file not shown.
@@ -1,10 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>arch</key>
    <array>
        <string>${apple.target.arch}</string>
    </array>
</dict>
</plist>
@@ -1,221 +0,0 @@
|
||||
<project name="javafx" default="download-javafx" basedir="..">
|
||||
<property file="ant/project.properties"/>
|
||||
<import file="${basedir}/ant/platform-detect.xml"/>
|
||||
<import file="${basedir}/ant/version.xml"/>
|
||||
|
||||
<!-- TODO: Short-circuit download if host and target are identical? -->
|
||||
<target name="download-javafx" depends="download-javafx-host,download-javafx-target"/>
|
||||
|
||||
<target name="download-javafx-host" unless="${host.fx.exists}" depends="get-javafx-versions,host-fx-exists">
|
||||
<antcall target="download-extract-javafx">
|
||||
<param name="fx.os" value="${host.os}"/>
|
||||
<param name="fx.arch" value="${host.arch}"/>
|
||||
<param name="fx.id" value="${host.fx.id}"/>
|
||||
<param name="fx.basedir" value="${host.fx.basedir}"/>
|
||||
<param name="fx.dir" value="${host.fx.dir}"/>
|
||||
<param name="fx.ver" value="${host.fx.ver}"/>
|
||||
<param name="fx.majver" value="${host.fx.majver}"/>
|
||||
<param name="fx.urlver" value="${host.fx.urlver}"/>
|
||||
</antcall>
|
||||
</target>
|
||||
|
||||
<target name="download-javafx-target" unless="${target.fx.exists}" depends="get-javafx-versions,target-fx-exists">
|
||||
<antcall target="download-extract-javafx">
|
||||
<param name="fx.os" value="${target.os}"/>
|
||||
<param name="fx.arch" value="${target.arch}"/>
|
||||
<param name="fx.id" value="${target.fx.id}"/>
|
||||
<param name="fx.basedir" value="${target.fx.basedir}"/>
|
||||
<param name="fx.dir" value="${target.fx.dir}"/>
|
||||
<param name="fx.majver" value="${target.fx.majver}"/>
|
||||
<param name="fx.urlver" value="${target.fx.urlver}"/>
|
||||
</antcall>
|
||||
</target>
|
||||
|
||||
<target name="host-fx-exists" depends="platform-detect">
|
||||
<!-- Host fx is saved to lib/ -->
|
||||
<property name="host.fx.basedir" value="${basedir}/lib"/>
|
||||
<property name="host.fx.id" value="javafx-${host.os}-${host.arch}-${host.fx.urlver}"/>
|
||||
<property name="host.fx.dir" value="${host.fx.basedir}/${host.fx.id}"/>
|
||||
<mkdir dir="${host.fx.dir}"/>
|
||||
|
||||
<!-- File to look for: "glass.dll", "libglass.dylib" or "libglass.so" -->
|
||||
<property name="host.libglass" value="${host.libprefix}glass.${host.libext}"/>
|
||||
|
||||
<!-- Grab the first file match -->
|
||||
<first id="host.fx.files">
|
||||
<fileset dir="${host.fx.dir}">
|
||||
<include name="**/${host.libglass}"/>
|
||||
</fileset>
|
||||
</first>
|
||||
<!-- Convert the file to a usable string -->
|
||||
<pathconvert property="host.fx.path" refid="host.fx.files"/>
|
||||
|
||||
<!-- Set our flag if found -->
|
||||
<condition property="host.fx.exists">
|
||||
<not><equals arg1="${host.fx.path}" arg2=""/></not>
|
||||
</condition>
|
||||
|
||||
<!-- Human readable message -->
|
||||
<condition property="host.fx.message"
|
||||
value="JavaFX host platform file ${host.libglass} found, skipping download.${line.separator}Location: ${host.fx.path}"
|
||||
else="JavaFX host platform file ${host.libglass} is missing, will download.${line.separator}Searched: ${host.fx.dir}">
|
||||
<isset property="host.fx.exists"/>
|
||||
</condition>
|
||||
|
||||
<echo level="info">${host.fx.message}</echo>
|
||||
</target>
|
||||
|
||||
<target name="target-fx-exists">
|
||||
<!-- Target fx is saved to out/ -->
|
||||
<property name="target.fx.basedir" value="${out.dir}"/>
|
||||
<property name="target.fx.id" value="javafx-${target.os}-${target.arch}-${target.fx.urlver}"/>
|
||||
<property name="target.fx.dir" value="${target.fx.basedir}/${target.fx.id}"/>
|
||||
<mkdir dir="${target.fx.dir}"/>
|
||||
|
||||
<!-- File to look for: "glass.dll", "libglass.dylib" or "libglass.so" -->
|
||||
<property name="target.libglass" value="${target.libprefix}glass.${target.libext}"/>
|
||||
|
||||
<!-- Grab the first file match -->
|
||||
<first id="target.fx.files">
|
||||
<fileset dir="${target.fx.dir}">
|
||||
<!-- look for "glass.dll", "libglass.dylib" or "libglass.so" -->
|
||||
<include name="**/${target.libglass}"/>
|
||||
</fileset>
|
||||
</first>
|
||||
<!-- Convert the file to a usable string -->
|
||||
<pathconvert property="target.fx.path" refid="target.fx.files"/>
|
||||
|
||||
<!-- Set our flag if found -->
|
||||
<condition property="target.fx.exists">
|
||||
<not><equals arg1="${target.fx.path}" arg2=""/></not>
|
||||
</condition>
|
||||
|
||||
<!-- Human readable message -->
|
||||
<condition property="target.fx.message"
|
||||
value="JavaFX target platform file ${target.libglass} found, skipping download.${line.separator}Location: ${target.fx.path}"
|
||||
else="JavaFX target platform file ${target.libglass} is missing, will download.${line.separator}Searched: ${target.fx.dir}">
|
||||
<isset property="target.fx.exists"/>
|
||||
</condition>
|
||||
|
||||
<echo level="info">${target.fx.message}</echo>
|
||||
</target>
|
||||
|
||||
<!--
|
||||
Populates: host.fx.ver, host.fx.urlver, target.fx.ver, target.fx.urlver
|
||||
|
||||
- Converts version to a usable URL format
|
||||
- Leverage older releases for Intel builds until upstream bug report SUPQZ-14 is fixed
|
||||
|
||||
To build: We need javafx to download a javafx which matches "host.os" and "host.arch"
|
||||
To package: We need javafx to download a javafx which matches "target.os" and "target.arch"
|
||||
-->
|
||||
<target name="get-javafx-versions" depends="platform-detect">
|
||||
<!-- Fallback to sane values -->
|
||||
<property name="host.fx.ver" value="${javafx.version}"/>
|
||||
<property name="target.fx.ver" value="${javafx.version}"/>
|
||||
|
||||
<!-- Handle pesky url "." = "-" differences -->
|
||||
<loadresource property="host.fx.urlver">
|
||||
<propertyresource name="host.fx.ver"/>
|
||||
<filterchain>
|
||||
<tokenfilter>
|
||||
<filetokenizer/>
|
||||
<replacestring from="." to="-"/>
|
||||
</tokenfilter>
|
||||
</filterchain>
|
||||
</loadresource>
|
||||
<loadresource property="target.fx.urlver">
|
||||
<propertyresource name="target.fx.ver"/>
|
||||
<filterchain>
|
||||
<tokenfilter>
|
||||
<filetokenizer/>
|
||||
<replacestring from="." to="-"/>
|
||||
</tokenfilter>
|
||||
</filterchain>
|
||||
</loadresource>
|
||||
<property description="suppress property warning" name="target.fx.urlver" value="something went wrong"/>
|
||||
<property description="suppress property warning" name="host.fx.urlver" value="something went wrong"/>
|
||||
|
||||
<!-- Calculate our javafx "major" version -->
|
||||
<loadresource property="host.fx.majver">
|
||||
<propertyresource name="host.fx.ver"/>
|
||||
<filterchain>
|
||||
<replaceregex pattern="[-_.].*" replace="" />
|
||||
</filterchain>
|
||||
</loadresource>
|
||||
<loadresource property="target.fx.majver">
|
||||
<propertyresource name="target.fx.ver"/>
|
||||
<filterchain>
|
||||
<replaceregex pattern="[-_.].*" replace="" />
|
||||
</filterchain>
|
||||
</loadresource>
|
||||
<property description="suppress property warning" name="target.fx.majver" value="something went wrong"/>
|
||||
<property description="suppress property warning" name="host.fx.majver" value="something went wrong"/>
|
||||
|
||||
<echo level="info">
|
||||
JavaFX host platform:
|
||||
Version: ${host.fx.ver} (${host.os}, ${host.arch})
|
||||
Major Version: ${host.fx.majver}
|
||||
URLs: "${host.fx.urlver}"
|
||||
|
||||
JavaFX target platform:
|
||||
Version: ${target.fx.ver} (${target.os}, ${target.arch})
|
||||
Major Version: ${target.fx.majver}
|
||||
URLs: ""${target.fx.urlver}"
|
||||
</echo>
|
||||
</target>
|
||||
|
||||
<!-- Downloads and extracts javafx for the specified platform -->
|
||||
<target name="download-extract-javafx">
|
||||
<!-- Cleanup old versions -->
|
||||
<delete includeemptydirs="true" defaultexcludes="false">
|
||||
<fileset dir="${fx.basedir}">
|
||||
<include name="javafx*/"/>
|
||||
</fileset>
|
||||
</delete>
|
||||
<mkdir dir="${fx.dir}"/>
|
||||
|
||||
<!-- Valid os values: "windows", "linux", "osx" -->
|
||||
<!-- translate "mac" to "osx" -->
|
||||
<condition property="fx.os.fixed" value="osx" else="${fx.os}">
|
||||
<equals arg1="${fx.os}" arg2="mac"/>
|
||||
</condition>
|
||||
|
||||
<!-- Valid arch values: "x64", "aarch64", "x86" -->
|
||||
<!-- translate "x86_64" to "x64" -->
|
||||
<condition property="fx.arch.fixed" value="x64">
|
||||
<or>
|
||||
<equals arg1="${fx.arch}" arg2="x86_64"/>
|
||||
<and>
|
||||
<!-- TODO: Remove "aarch64" to "x64" when windows aarch64 binaries become available -->
|
||||
<equals arg1="${fx.arch}" arg2="aarch64"/>
|
||||
<equals arg1="${fx.os}" arg2="windows"/>
|
||||
</and>
|
||||
<and>
|
||||
<!-- TODO: Remove "riscv" to "x64" when linux riscv64 binaries become available -->
|
||||
<equals arg1="${fx.arch}" arg2="riscv64"/>
|
||||
<equals arg1="${fx.os}" arg2="linux"/>
|
||||
</and>
|
||||
</or>
|
||||
</condition>
|
||||
<property name="fx.arch.fixed" value="${fx.arch}" description="fallback value"/>
|
||||
|
||||
<!-- Fix underscore when "monocle" is missing -->
|
||||
<condition property="fx.url" value="${javafx.mirror}/${fx.majver}/openjfx-${fx.urlver}_${fx.os.fixed}-${fx.arch.fixed}_bin-sdk.zip">
|
||||
<not>
|
||||
<contains string="${fx.urlver}" substring="monocle"/>
|
||||
</not>
|
||||
</condition>
|
||||
|
||||
<property name="fx.url" value="${javafx.mirror}/${fx.majver}/openjfx-${fx.urlver}-${fx.os.fixed}-${fx.arch.fixed}_bin-sdk.zip"/>
|
||||
<property name="fx.zip" value="${out.dir}/${fx.id}.zip"/>
|
||||
|
||||
<echo level="info">Downloading JavaFX from ${fx.url}</echo>
|
||||
<echo level="info">Temporarily saving JavaFX to ${fx.zip}</echo>
|
||||
|
||||
<mkdir dir="${out.dir}"/>
|
||||
<get src="${fx.url}" verbose="true" dest="${fx.zip}"/>
|
||||
<unzip src="${fx.zip}" dest="${fx.dir}" overwrite="true"/>
|
||||
<delete file="${fx.zip}"/>
|
||||
</target>
|
||||
</project>
|
||||
Binary file not shown.
@@ -1,109 +0,0 @@
|
||||
# 2018 Yohanes Nugroho <yohanes@gmail.com> (@yohanes)
|
||||
#
|
||||
# 1. Download icu4j source code, build using ant.
|
||||
# It will generate icu4j.jar and icu4j-charset.jar
|
||||
#
|
||||
# 2. Run slim-icu.py to generate slim version.
|
||||
#
|
||||
# To invoke from ant, add python to $PATH
|
||||
# and add the following to build.xml:
|
||||
#
|
||||
# <target name="distill-icu" depends="init">
|
||||
# <exec executable="python">
|
||||
# <arg line="ant/lib/slim-icu.py lib/charsets"/>
|
||||
# </exec>
|
||||
# </target>
|
||||
#
|
||||
# ... then call: ant distill-icu
|
||||
#
|
||||
# 3. Overwrite files in lib/charsets/
|
||||
|
||||
# slim ICU
|
||||
import sys
|
||||
import os
|
||||
from pathlib import Path
|
||||
import zipfile
|
||||
from zipfile import ZipFile
|
||||
|
||||
directory = str(Path(__file__).resolve().parent)
|
||||
if len(sys.argv) > 1:
|
||||
directory = sys.argv[1]
|
||||
|
||||
mode = zipfile.ZIP_DEFLATED
|
||||
|
||||
|
||||
def keep_file(filename):
|
||||
# skip all break iterators
|
||||
if filename.endswith(".brk") \
|
||||
or filename.endswith(".dict") \
|
||||
or filename.endswith("unames.icu") \
|
||||
or filename.endswith("ucadata.icu") \
|
||||
or filename.endswith(".spp"):
|
||||
return False
|
||||
|
||||
# keep english and arabic
|
||||
if filename.startswith("en") \
|
||||
or filename.startswith("ar") \
|
||||
or not filename.endswith(".res"):
|
||||
return True
|
||||
|
||||
return False
|
||||
|
||||
|
||||
zin = ZipFile(os.path.join(directory, 'icu4j.jar'), 'r')
|
||||
zout = ZipFile(os.path.join(directory, 'icu4j-slim.jar'), 'w', mode)
|
||||
|
||||
for item in zin.infolist():
|
||||
buff = zin.read(item.filename)
|
||||
print(item.filename)
|
||||
|
||||
if keep_file(item.filename):
|
||||
print("Keep")
|
||||
zout.writestr(item, buff)
|
||||
else:
|
||||
print("Remove")
|
||||
|
||||
zout.close()
|
||||
zin.close()
|
||||
|
||||
|
||||
def keep_charset_file(filename):
|
||||
to_remove = [
|
||||
"cns-11643-1992.cnv",
|
||||
"ebcdic-xml-us.cnv",
|
||||
"euc-jp-2007.cnv",
|
||||
"euc-tw-2014.cnv",
|
||||
"gb18030.cnv",
|
||||
"ibm-1363_P11B-1998.cnv",
|
||||
"ibm-1364_P110-2007.cnv",
|
||||
"ibm-1371_P100-1999.cnv",
|
||||
"ibm-1373_P100-2002.cnv",
|
||||
"ibm-1375_P100-2008.cnv",
|
||||
"ibm-1383_P110-1999.cnv",
|
||||
"ibm-1386_P100-2001.cnv",
|
||||
"ibm-1388_P103-2001.cnv",
|
||||
"ibm-1390_P110-2003.cnv"
|
||||
]
|
||||
|
||||
for i in to_remove:
|
||||
if i in filename:
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
|
||||
zin = ZipFile(os.path.join(directory, 'icu4j-charset.jar'), 'r')
|
||||
zout = ZipFile(os.path.join(directory, 'icu4j-charset-slim.jar'), 'w', mode)
|
||||
|
||||
for item in zin.infolist():
|
||||
buff = zin.read(item.filename)
|
||||
print(item.filename, end=' ')
|
||||
|
||||
if keep_charset_file(item.filename):
|
||||
print("Keep")
|
||||
zout.writestr(item, buff)
|
||||
else:
|
||||
print("Remove")
|
||||
|
||||
zout.close()
|
||||
zin.close()
|
||||
@@ -1,69 +0,0 @@
|
||||
<project name="linux-installer" basedir="../../">
|
||||
<property file="ant/project.properties"/>
|
||||
<property file="ant/linux/linux.properties"/>
|
||||
<import file="${basedir}/ant/version.xml"/>
|
||||
<import file="${basedir}/ant/platform-detect.xml"/>
|
||||
|
||||
<target name="build-run" depends="get-version,platform-detect">
|
||||
<echo level="info">Creating installer using makeself</echo>
|
||||
|
||||
<!-- Get the os-preferred name for the target architecture -->
|
||||
<condition property="linux.target.arch" value="arm64">
|
||||
<isset property="target.arch.aarch64"/>
|
||||
</condition>
|
||||
<property name="linux.target.arch" value="${target.arch}" description="fallback value"/>
|
||||
|
||||
<copy file="assets/branding/linux-icon.svg" tofile="${dist.dir}/${project.filename}.svg"/>
|
||||
|
||||
<mkdir dir="${build.dir}/scripts"/>
|
||||
<copy file="ant/linux/linux-installer.sh.in" tofile="${dist.dir}/install">
|
||||
<filterchain><expandproperties/></filterchain>
|
||||
</copy>
|
||||
|
||||
<copy file="ant/unix/unix-launcher.sh.in" tofile="${dist.dir}/${project.filename}">
|
||||
<filterchain><expandproperties/></filterchain>
|
||||
</copy>
|
||||
|
||||
<copy file="ant/unix/unix-uninstall.sh.in" tofile="${dist.dir}/uninstall">
|
||||
<filterchain><expandproperties/></filterchain>
|
||||
</copy>
|
||||
|
||||
<chmod perm="a+x" type="file">
|
||||
<fileset dir="${dist.dir}">
|
||||
<include name="**/${project.filename}"/>
|
||||
<include name="**/install"/>
|
||||
<include name="**/uninstall"/>
|
||||
</fileset>
|
||||
</chmod>
|
||||
|
||||
<exec executable="makeself" failonerror="true">
|
||||
<arg value="${dist.dir}"/>
|
||||
<arg value="${out.dir}/${project.filename}${build.type}-${build.version}-${linux.target.arch}.run"/>
|
||||
<arg value="${project.name} Installer"/>
|
||||
<arg value="./install"/>
|
||||
</exec>
|
||||
</target>
|
||||
|
||||
<target name="copy-solibs" if="target.os.linux">
|
||||
<echo level="info">Copying native library files to libs</echo>
|
||||
|
||||
<mkdir dir="${dist.dir}/libs"/>
|
||||
<copy todir="${dist.dir}/libs" flatten="true" verbose="true">
|
||||
<fileset dir="${out.dir}/libs-temp">
|
||||
<!--x86_64-->
|
||||
<include name="**/linux-x86-64/*" if="target.arch.x86_64"/> <!-- jna/hid4java -->
|
||||
<include name="**/linux-x86_64/*" if="target.arch.x86_64"/> <!-- usb4java -->
|
||||
<include name="**/linux_64/*" if="target.arch.x86_64"/> <!-- jssc -->
|
||||
<!--aarch64-->
|
||||
<include name="**/linux-aarch64/*" if="target.arch.aarch64"/> <!-- jna/hid4java/usb4java -->
|
||||
<include name="**/linux_arm64/*" if="target.arch.aarch64"/> <!-- jssc -->
|
||||
<!--arm32-->
|
||||
<include name="**/linux-arm/*" if="target.arch.arm32"/> <!-- jna/hid4java/usb4java -->
|
||||
<include name="**/linux_arm/*" if="target.arch.arm32"/> <!-- jssc -->
|
||||
<!--riscv64-->
|
||||
<include name="**/linux-riscv64/*" if="target.arch.riscv64"/> <!-- jna/hid4java -->
|
||||
<include name="**/linux_riscv64/*" if="target.arch.riscv64"/> <!-- jssc -->
|
||||
</fileset>
|
||||
</copy>
|
||||
</target>
|
||||
</project>
|
||||
@@ -1,68 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Halt on first error
|
||||
set -e
|
||||
|
||||
if [ "$(id -u)" != "0" ]; then
|
||||
echo "This script must be run with root (sudo) privileges" 1>&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Console colors
|
||||
RED="\\x1B[1;31m";GREEN="\\x1B[1;32m";YELLOW="\\x1B[1;33m";PLAIN="\\x1B[0m"
|
||||
|
||||
# Statuses
|
||||
SUCCESS=" [${GREEN}success${PLAIN}]"
|
||||
FAILURE=" [${RED}failure${PLAIN}]"
|
||||
WARNING=" [${YELLOW}warning${PLAIN}]"
|
||||
|
||||
mask=755
|
||||
|
||||
echo -e "Starting install...\n"
|
||||
|
||||
# Clear the log for writing
|
||||
> "${install.log}"
|
||||
|
||||
run_task () {
|
||||
echo -e "Running $1 task..."
|
||||
if [ -n "$DEBUG" ]; then
|
||||
"./${project.filename}" $@ && ret_val=$? || ret_val=$?
|
||||
else
|
||||
"./${project.filename}" $@ &>> "${install.log}" && ret_val=$? || ret_val=$?
|
||||
fi
|
||||
|
||||
if [ $ret_val -eq 0 ]; then
|
||||
echo -e " $SUCCESS Task $1 was successful"
|
||||
else
|
||||
if [ "$1" == "spawn" ]; then
|
||||
echo -e " $WARNING Task $1 skipped. You'll have to start ${project.name} manually."
|
||||
return
|
||||
fi
|
||||
echo -e " $FAILURE Task $1 failed.\n\nRe-run with DEBUG=true for more information."
|
||||
false # throw error
|
||||
fi
|
||||
}
|
||||
|
||||
# Ensure java is installed and working before starting
|
||||
"./${project.filename}" --version
|
||||
|
||||
# Make a temporary jar for preliminary installation steps
|
||||
run_task preinstall
|
||||
|
||||
run_task install --dest "/opt/${project.filename}"
|
||||
|
||||
# We should be installed now, generate the certificate
|
||||
pushd "/opt/${project.filename}" &> /dev/null
|
||||
run_task certgen
|
||||
|
||||
# Tell the desktop to look for new mimetypes in the background
|
||||
umask_bak="$(umask)"
|
||||
umask 0002 # more permissive umask for mimetype registration
|
||||
update-desktop-database &> /dev/null &
|
||||
umask "$umask_bak"
|
||||
|
||||
echo "Installation complete... Starting ${project.name}..."
|
||||
# spawn itself as a regular user, inheriting environment
|
||||
run_task spawn "/opt/${project.filename}/${project.filename}"
|
||||
|
||||
popd &> /dev/null
|
||||
@@ -1,2 +0,0 @@
# Expose UNIXToolkit.getGtkVersion
linux.launch.jigsaw=--add-opens java.desktop/sun.awt=ALL-UNNAMED
@@ -1,254 +0,0 @@
|
||||
<project name="host-info" default="platform-detect" basedir="..">
|
||||
<property file="ant/project.properties"/>
|
||||
<!--
|
||||
Detects and echos host and target information
|
||||
|
||||
String:
|
||||
- host.os, host.arch, host.libext, host.libprefix
|
||||
- target.os, target.arch, target.libext, target.libprefix
|
||||
|
||||
Booleans:
|
||||
- host.${host.arch}=true, host.${host.os}=true
|
||||
- target.${target.arch}=true, target.${target.os}=true
|
||||
-->
|
||||
<target name="platform-detect" depends="get-target-os,get-target-arch,get-libext">
|
||||
<!-- Echo host information -->
|
||||
<antcall target="echo-platform">
|
||||
<param name="title" value="Host"/>
|
||||
<param name="prefix" value="host"/>
|
||||
<param name="prefix.os" value="${host.os}"/>
|
||||
<param name="prefix.arch" value="${host.arch}"/>
|
||||
<param name="prefix.libext" value="${host.libext}"/>
|
||||
</antcall>
|
||||
<!-- Echo target information -->
|
||||
<antcall target="echo-platform">
|
||||
<param name="title" value="Target"/>
|
||||
<param name="prefix" value="target"/>
|
||||
<param name="prefix.os" value="${target.os}"/>
|
||||
<param name="prefix.arch" value="${target.arch}"/>
|
||||
<param name="prefix.libext" value="${target.libext}"/>
|
||||
</antcall>
|
||||
</target>
|
||||
<target name="echo-platform">
|
||||
<!-- Make output more readable -->
|
||||
|
||||
<!-- Boolean platform.os.foo value -->
|
||||
<condition property="os.echo" value="${prefix}.os.windows">
|
||||
<isset property="${prefix}.os.windows"/>
|
||||
</condition>
|
||||
<condition property="os.echo" value="${prefix}.os.mac">
|
||||
<isset property="${prefix}.os.mac"/>
|
||||
</condition>
|
||||
<property name="os.echo" value="${prefix}.os.linux" description="fallback value"/>
|
||||
|
||||
<!-- Boolean target.arch.foo value -->
|
||||
<condition property="arch.echo" value="${prefix}.arch.aarch64">
|
||||
<isset property="${prefix}.arch.aarch64"/>
|
||||
</condition>
|
||||
<property name="arch.echo" value="${prefix}.arch.x86_64" description="fallback value"/>
|
||||
|
||||
<echo level="info">
|
||||
${title} platform:
|
||||
${prefix}.os: "${prefix.os}"
|
||||
${prefix}.arch: "${prefix.arch}"
|
||||
${prefix}.libext: "${prefix.libext}"
|
||||
${os.echo}: true
|
||||
${arch.echo}: true
|
||||
</echo>
|
||||
|
||||
</target>
|
||||
|
||||
<!-- Force Linux runtime. Set by "makeself" target -->
|
||||
<target name="target-os-linux">
|
||||
<!-- String value -->
|
||||
<property name="target.os" value ="linux"/>
|
||||
<!-- Boolean value -->
|
||||
<property name="target.os.linux" value="true"/>
|
||||
</target>
|
||||
|
||||
<!-- Force Linux runtime. Set by "nsis" target -->
|
||||
<target name="target-os-windows">
|
||||
<!-- String value -->
|
||||
<property name="target.os" value ="windows"/>
|
||||
<!-- Boolean value -->
|
||||
<property name="target.os.windows" value="true"/>
|
||||
</target>
|
||||
|
||||
<!-- Force Linux runtime. Set by "pkgbuild", "dmg" targets -->
|
||||
<target name="target-os-mac">
|
||||
<!-- String value -->
|
||||
<property name="target.os" value ="mac"/>
|
||||
<!-- Boolean value -->
|
||||
<property name="target.os.mac" value="true"/>
|
||||
</target>
|
||||
|
||||
<target name="get-target-os" depends="get-host-os">
|
||||
<!-- Suppress property warning :) -->
|
||||
<condition description="suppress property warning (no-op)"
|
||||
property="target.os" value="${target.os}">
|
||||
<isset property="target.os"/>
|
||||
</condition>
|
||||
<!-- Set Boolean if only the String was set -->
|
||||
<condition property="target.os.windows">
|
||||
<and>
|
||||
<isset property="target.os"/>
|
||||
<equals arg1="${target.os}" arg2="windows"/>
|
||||
</and>
|
||||
</condition>
|
||||
<condition property="target.os.mac">
|
||||
<and>
|
||||
<isset property="target.os"/>
|
||||
<equals arg1="${target.os}" arg2="mac"/>
|
||||
</and>
|
||||
</condition>
|
||||
<condition property="target.os.linux">
|
||||
<and>
|
||||
<isset property="target.os"/>
|
||||
<equals arg1="${target.os}" arg2="linux"/>
|
||||
</and>
|
||||
</condition>
|
||||
|
||||
<!-- Fallback to host boolean values if target values aren't specified -->
|
||||
<property name="target.os" value="${host.os}" description="fallback value"/>
|
||||
<condition property="target.os.windows" description="fallback value">
|
||||
<equals arg1="${target.os}" arg2="windows"/>
|
||||
</condition>
|
||||
<condition property="target.os.mac" description="fallback value">
|
||||
<equals arg1="${target.os}" arg2="mac"/>
|
||||
</condition>
|
||||
<condition property="target.os.linux" description="fallback value">
|
||||
<equals arg1="${target.os}" arg2="linux"/>
|
||||
</condition>
|
||||
</target>
|
||||
|
||||
<!-- Calculate target architecture based on ${target.arch} value -->
|
||||
<target name="get-target-arch" depends="get-host-arch">
|
||||
<!-- Fallback to ${host.arch} if not specified -->
|
||||
<property name="target.arch" value="${host.arch}" description="fallback value"/>
|
||||
<condition property="target.arch.x86_64">
|
||||
<equals arg1="amd64" arg2="${target.arch}"/>
|
||||
</condition>
|
||||
<condition property="target.arch.x86_64">
|
||||
<equals arg1="x86_64" arg2="${target.arch}"/>
|
||||
</condition>
|
||||
<condition property="target.arch.aarch64">
|
||||
<equals arg1="aarch64" arg2="${target.arch}"/>
|
||||
</condition>
|
||||
<condition property="target.arch.riscv64">
|
||||
<equals arg1="riscv64" arg2="${target.arch}"/>
|
||||
</condition>
|
||||
<!-- Warning: Placeholder only! 32-bit builds are not supported -->
|
||||
<condition property="target.arch.arm32">
|
||||
<equals arg1="arm32" arg2="${target.arch}"/>
|
||||
</condition>
|
||||
<condition property="target.arch.x86">
|
||||
<equals arg1="x86" arg2="${target.arch}"/>
|
||||
</condition>
|
||||
</target>
|
||||
|
||||
<!-- Calculate native file extension -->
|
||||
<target name="get-libext" depends="get-host-os">
|
||||
<!-- Some constants -->
|
||||
<property name="windows.libext" value="dll"/>
|
||||
<property name="mac.libext" value="dylib"/>
|
||||
<property name="linux.libext" value="so"/>
|
||||
<!-- Host uses "dll" -->
|
||||
<condition property="host.libext" value="${windows.libext}">
|
||||
<isset property="host.os.windows"/>
|
||||
</condition>
|
||||
<!-- Host uses "dylib" -->
|
||||
<condition property="host.libext" value="${mac.libext}">
|
||||
<isset property="host.os.mac"/>
|
||||
</condition>
|
||||
<!-- Host uses "so" -->
|
||||
<condition property="host.libext" value="${linux.libext}">
|
||||
<isset property="host.os.linux"/>
|
||||
</condition>
|
||||
<!-- Target uses "dll" -->
|
||||
<condition property="target.libext" value="${windows.libext}">
|
||||
<isset property="target.os.windows"/>
|
||||
</condition>
|
||||
<!-- Target uses "dylib" -->
|
||||
<condition property="target.libext" value="${mac.libext}">
|
||||
<isset property="target.os.mac"/>
|
||||
</condition>
|
||||
<!-- Target uses "so" -->
|
||||
<condition property="target.libext" value="${linux.libext}">
|
||||
<isset property="target.os.linux"/>
|
||||
</condition>
|
||||
|
||||
<!-- Target uses "" or "lib" prefix for native files -->
|
||||
<condition property="host.libprefix" value="" else="lib">
|
||||
<isset property="host.os.windows"/>
|
||||
</condition>
|
||||
|
||||
<!-- Host uses "" or "lib" prefix for native files -->
|
||||
<condition property="target.libprefix" value="" else="lib">
|
||||
<isset property="target.os.windows"/>
|
||||
</condition>
|
||||
</target>
|
||||
|
||||
<!-- Calculate and standardize host architecture based on ${os.arch} value -->
|
||||
<target name="get-host-arch">
|
||||
<!-- Boolean value (x86_64) -->
|
||||
<condition property="host.arch.x86_64">
|
||||
<equals arg1="amd64" arg2="${os.arch}"/>
|
||||
</condition>
|
||||
<condition property="host.arch.x86_64">
|
||||
<equals arg1="x86_64" arg2="${os.arch}"/>
|
||||
</condition>
|
||||
|
||||
<!-- Boolean value (aarch64) -->
|
||||
<condition property="host.arch.aarch64">
|
||||
<equals arg1="aarch64" arg2="${os.arch}"/>
|
||||
</condition>
|
||||
|
||||
<!-- Boolean value (x86 - unsupported) -->
|
||||
<condition property="host.arch.x86">
|
||||
<equals arg1="x86" arg2="${os.arch}"/>
|
||||
</condition>
|
||||
|
||||
<!-- String value (aarch64) -->
|
||||
<condition property="host.arch" value="aarch64">
|
||||
<equals arg1="aarch64" arg2="${os.arch}"/>
|
||||
</condition>
|
||||
<!-- String value (x86) -->
|
||||
<condition property="host.arch" value="x86">
|
||||
<equals arg1="x86" arg2="${os.arch}"/>
|
||||
</condition>
|
||||
<condition property="host.arch" value="x86">
|
||||
<equals arg1="i386" arg2="${os.arch}"/>
|
||||
</condition>
|
||||
|
||||
<!-- String value (x86_64 - fallback, most common) -->
|
||||
<property name="host.arch" value="x86_64" description="fallback value"/>
|
||||
</target>
|
||||
|
||||
<!-- Calculate the host os -->
|
||||
<target name="get-host-os">
|
||||
<!-- Boolean value -->
|
||||
<condition property="host.os.windows" value="true">
|
||||
<os family="windows"/>
|
||||
</condition>
|
||||
<condition property="host.os.mac" value="true">
|
||||
<os family="mac"/>
|
||||
</condition>
|
||||
<condition property="host.os.linux" value="true">
|
||||
<and>
|
||||
<os family="unix"/>
|
||||
<not>
|
||||
<os family="mac"/>
|
||||
</not>
|
||||
</and>
|
||||
</condition>
|
||||
|
||||
<!-- String value -->
|
||||
<condition property="host.os" value="windows">
|
||||
<os family="windows"/>
|
||||
</condition>
|
||||
<condition property="host.os" value="mac">
|
||||
<os family="mac"/>
|
||||
</condition>
|
||||
<property name="host.os" value="linux" description="fallback value"/>
|
||||
</target>
|
||||
</project>
|
||||
@@ -1,5 +0,0 @@
signing.alias=self-signed
signing.keystore=ant/private/qz.ks
signing.keypass=jzebraonfire
signing.storepass=jzebraonfire
signing.algorithm=SHA-256
@@ -1,62 +0,0 @@
|
||||
vendor.name=qz
|
||||
vendor.company=QZ Industries, LLC
|
||||
vendor.website=https://qz.io
|
||||
vendor.email=support@qz.io
|
||||
|
||||
project.name=QZ Tray
|
||||
project.filename=qz-tray
|
||||
project.datadir=qz
|
||||
|
||||
install.opts=-Djna.nosys=true
|
||||
launch.opts=-Xms512m ${install.opts}
|
||||
install.log=/tmp/${project.datadir}-install.log
|
||||
# jdk9+ flags
|
||||
# - Dark theme requires workaround https://github.com/bobbylight/Darcula/issues/8
|
||||
launch.jigsaw=--add-exports java.desktop/sun.swing=ALL-UNNAMED
|
||||
launch.overrides=QZ_OPTS
|
||||
|
||||
src.dir=${basedir}/src
|
||||
out.dir=${basedir}/out
|
||||
build.dir=${out.dir}/build
|
||||
dist.dir=${out.dir}/dist
|
||||
|
||||
sign.lib.dir=${out.dir}/jar-signed
|
||||
|
||||
jar.compress=true
|
||||
jar.index=true
|
||||
|
||||
# Separate native lib resources from jars
|
||||
separate.static.libs=true
|
||||
|
||||
# See also qz.common.Constants.java
|
||||
javac.source=11
|
||||
javac.target=11
|
||||
java.download=https://bell-sw.com/pages/downloads/#/java-11-lts
|
||||
|
||||
# Java vendor to bundle into software (e.g. "*BellSoft|Adoptium|Microsoft|Amazon|IBM")
|
||||
jlink.java.vendor="BellSoft"
|
||||
# Java vendor to bundle into software (e.g. "11.0.17+7")
|
||||
jlink.java.version="11.0.27+9"
|
||||
# Java garbage collector flavor to use (e.g. "hotspot|openj9")
|
||||
jlink.java.gc="hotspot"
|
||||
# Java garbage collector version to use (e.g. openj9: "0.35.0", zulu: "11.62.17")
|
||||
jlink.java.gc.version="gc-ver-is-empty"
|
||||
# Bundle a locally built copy of Java instead
|
||||
jlink.java.target=/home/ske087/quality_recticel/jdk-11.0.20-full
|
||||
|
||||
# Skip bundling the java runtime
|
||||
jre.skip=false
|
||||
|
||||
# JavaFX version
|
||||
javafx.version=19_monocle
|
||||
javafx.mirror=https://download2.gluonhq.com/openjfx
|
||||
|
||||
# Provisioning
|
||||
# provision.file=${basedir}/provision.json
|
||||
provision.dir=${dist.dir}/provision
|
||||
|
||||
# Mask tray toggle (Apple only)
|
||||
java.mask.tray=true
|
||||
|
||||
# Workaround to delay expansion of $${foo} (e.g. shell scripts)
|
||||
dollar=$
|
||||
@@ -1,196 +0,0 @@
|
||||
<project name="signing-helpers" basedir="../">
|
||||
<property file="ant/project.properties"/>
|
||||
|
||||
<!-- Custom code-signing properties -->
|
||||
<property file="${basedir}/../private/private.properties"/>
|
||||
|
||||
<!-- Fallback code-signing properties -->
|
||||
<property file="ant/private/private.properties"/>
|
||||
|
||||
<!-- Locate first jsign-x.x.x.jar sorted name desc -->
|
||||
<target name="find-jsign">
|
||||
<sort id="jsign.sorted">
|
||||
<fileset dir="${basedir}/ant/lib/">
|
||||
<include name="jsign*.jar"/>
|
||||
</fileset>
|
||||
<reverse xmlns="antlib:org.apache.tools.ant.types.resources.comparators"/>
|
||||
</sort>
|
||||
<first id="jsign.first">
|
||||
<resources refid="jsign.sorted"/>
|
||||
</first>
|
||||
<pathconvert property="jsign.path" refid="jsign.first">
|
||||
<identitymapper/>
|
||||
</pathconvert>
|
||||
|
||||
<echo message="Found jsign: ${jsign.path}"/>
|
||||
</target>
|
||||
|
||||
<!-- File signing -->
|
||||
<target name="sign-file">
|
||||
<!-- Self-sign -->
|
||||
<antcall target="sign-file-self">
|
||||
<param name="sign.file" value="${sign.file}"/>
|
||||
</antcall>
|
||||
|
||||
<!-- EV-sign using HSM -->
|
||||
<antcall target="sign-file-hsm">
|
||||
<param name="sign.file" value="${sign.file}"/>
|
||||
</antcall>
|
||||
</target>
|
||||
|
||||
<!-- Jar signing -->
|
||||
<target name="sign-jar">
|
||||
<!-- Self-sign -->
|
||||
<antcall target="sign-jar-self">
|
||||
<param name="sign.file" value="${sign.file}"/>
|
||||
</antcall>
|
||||
|
||||
<!-- EV-sign using HSM -->
|
||||
<antcall target="sign-jar-hsm">
|
||||
<param name="sign.file" value="${sign.file}"/>
|
||||
</antcall>
|
||||
</target>
|
||||
|
||||
<!-- File signing via hsm with timestamp -->
|
||||
<target name="sign-file-hsm" if="hsm.storetype" depends="find-jsign">
|
||||
<echo level="info">Signing with hsm: ${hsm.keystore}</echo>
|
||||
<java jar="${jsign.path}" fork="true" failonerror="true">
|
||||
<arg value="--name"/>
|
||||
<arg value="${project.name}"/>
|
||||
<arg value="--url"/>
|
||||
<arg value="${vendor.website}"/>
|
||||
<arg value="--replace"/>
|
||||
<arg value="--alg"/>
|
||||
<arg value="${hsm.algorithm}"/>
|
||||
<arg value="--storetype"/>
|
||||
<arg value="${hsm.storetype}"/>
|
||||
<arg value="--keystore"/>
|
||||
<arg value="${hsm.keystore}"/>
|
||||
<arg value="--alias"/>
|
||||
<arg value="${hsm.alias}"/>
|
||||
<arg value="--storepass"/>
|
||||
<arg value="${hsm.storepass}"/>
|
||||
<arg value="--tsaurl"/>
|
||||
<arg value="${hsm.tsaurl}"/>
|
||||
<arg value="--certfile"/>
|
||||
<arg value="${hsm.certfile}"/>
|
||||
<arg line="${sign.file}"/>
|
||||
</java>
|
||||
</target>
|
||||
|
||||
<!-- Jar signing via hsm with timestamp -->
|
||||
<target name="sign-jar-hsm" if="hsm.storetype" depends="find-jsign,get-jar-alg">
|
||||
<signjar providerclass="net.jsign.jca.JsignJcaProvider"
|
||||
providerarg="${hsm.keystore}"
|
||||
alias="${hsm.alias}"
|
||||
storepass="${hsm.storepass}"
|
||||
storetype="${hsm.storetype}"
|
||||
keystore="NONE"
|
||||
sigalg="${jar.sigalg}"
|
||||
digestalg="${jar.digestalg}"
|
||||
tsaurl="${hsm.tsaurl}"
|
||||
jar="${sign.file}"
|
||||
signedjar="${sign.file}">
|
||||
<!-- special args needed by jsign -->
|
||||
<arg value="-J-cp"/><arg value="-J${jsign.path}"/>
|
||||
<arg value="-J--add-modules"/><arg value="-Jjava.sql"/>
|
||||
<arg value="-certchain"/><arg file="${hsm.certfile}"/>
|
||||
</signjar>
|
||||
</target>
|
||||
|
||||
<!-- File signing via arbitrary key without timestamp -->
|
||||
<target name="sign-file-self" unless="hsm.storetype" depends="find-jsign,find-keystore-self">
|
||||
<echo level="info">Signing without timestamp:</echo>
|
||||
<tsa-warning/>
|
||||
<java jar="${jsign.path}" fork="true" failonerror="true">
|
||||
<arg value="--name"/>
|
||||
<arg value="${project.name}"/>
|
||||
<arg value="--url"/>
|
||||
<arg value="${vendor.website}"/>
|
||||
<arg value="--replace"/>
|
||||
<arg value="--alg"/>
|
||||
<arg value="${signing.algorithm}"/>
|
||||
<arg value="--keystore"/>
|
||||
<arg value="${signing.keystore}"/>
|
||||
<arg value="--alias"/>
|
||||
<arg value="${signing.alias}"/>
|
||||
<arg value="--storepass"/>
|
||||
<arg value="${signing.storepass}"/>
|
||||
<arg value="--keypass"/>
|
||||
<arg value="${signing.keypass}"/>
|
||||
<arg line="${sign.file}"/>
|
||||
</java>
|
||||
</target>
|
||||
|
||||
<!-- Jar signing via arbitrary key without timestamp -->
|
||||
<target name="sign-jar-self" unless="hsm.storetype" depends="find-jsign,find-keystore-self,get-jar-alg">
|
||||
<signjar alias="${signing.alias}"
|
||||
storepass="${signing.storepass}"
|
||||
keystore="${signing.keystore}"
|
||||
keypass="${signing.keypass}"
|
||||
sigalg="${jar.sigalg}"
|
||||
digestalg="${jar.digestalg}"
|
||||
jar="${sign.file}"
|
||||
signedjar="${sign.file}"
|
||||
/>
|
||||
</target>
|
||||
|
||||
<!-- Maps jsign algorithm to jarsigner algorithm -->
|
||||
<target name="get-jar-alg">
|
||||
<!-- Populate from hsm.algorithm or signing.algorithm -->
|
||||
<condition property="jar.algorithm" value="${hsm.algorithm}">
|
||||
<isset property="${hsm.algorithm}"/>
|
||||
</condition>
|
||||
<property name="jar.algorithm" value="${signing.algorithm}" description="fallback value"/>
|
||||
|
||||
<!-- Convert "SHA-256" to "SHA256", etc -->
|
||||
<loadresource property="convert.algorithm">
|
||||
<propertyresource name="jar.algorithm"/>
|
||||
<filterchain>
|
||||
<tokenfilter>
|
||||
<filetokenizer/>
|
||||
<replacestring from="-" to=""/>
|
||||
</tokenfilter>
|
||||
</filterchain>
|
||||
</loadresource>
|
||||
<property name="convert.algorithm" value="something went wrong" description="fallback value"/>
|
||||
|
||||
<!-- e.g. "SHA256withRSA" -->
|
||||
<property description="Signature Algorithm" name="jar.sigalg" value="${convert.algorithm}withRSA"/>
|
||||
|
||||
<!-- e.g. "SHA256" -->
|
||||
<property description="Digest Algorithm" name="jar.digestalg" value="${convert.algorithm}"/>
|
||||
</target>
|
||||
|
||||
<target name="find-keystore-self">
|
||||
<available file="${signing.keystore}" property="keystore.exists"/>
|
||||
<antcall target="generate-keystore-self"/>
|
||||
</target>
|
||||
|
||||
<target name="generate-keystore-self" unless="keystore.exists">
|
||||
<genkey
|
||||
alias="${signing.alias}"
|
||||
keyalg="RSA"
|
||||
keysize="2048"
|
||||
keystore="${signing.keystore}"
|
||||
storepass="${signing.storepass}"
|
||||
validity="3650"
|
||||
verbose="true">
|
||||
<dname>
|
||||
<param name="CN" value="${vendor.company} (self-signed)"/>
|
||||
<param name="OU" value="${project.name}"/>
|
||||
<param name="O" value="${vendor.website}"/>
|
||||
<param name="C" value="US"/>
|
||||
</dname>
|
||||
</genkey>
|
||||
</target>
|
||||
|
||||
<macrodef name="tsa-warning">
|
||||
<sequential>
|
||||
<echo level="warn">
|
||||
No tsaurl was provided so the file was not timestamped. Users will not be able to validate
|
||||
this file after the signer certificate's expiration date or after any future revocation date.
|
||||
</echo>
|
||||
</sequential>
|
||||
</macrodef>
|
||||
</project>
|
||||
@@ -1,138 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
# Shared launcher for MacOS and Linux
|
||||
# Parameters -- if any -- are passed on to the app
|
||||
|
||||
# Halt on first error
|
||||
set -e
|
||||
|
||||
# Configured by ant at build time
|
||||
JAVA_MIN="${javac.target}"
|
||||
LAUNCH_OPTS="${launch.opts}"
|
||||
ABOUT_TITLE="${project.name}"
|
||||
PROPS_FILE="${project.filename}"
|
||||
|
||||
# Get working directory
|
||||
DIR=$(cd "$(dirname "$0")" && pwd)
|
||||
pushd "$DIR" &> /dev/null
|
||||
|
||||
# Console colors
|
||||
RED="\\x1B[1;31m";GREEN="\\x1B[1;32m";YELLOW="\\x1B[1;33m";PLAIN="\\x1B[0m"
|
||||
|
||||
# Statuses
|
||||
SUCCESS=" [${GREEN}success${PLAIN}]"
|
||||
FAILURE=" [${RED}failure${PLAIN}]"
|
||||
WARNING=" [${YELLOW}warning${PLAIN}]"
|
||||
MESSAGE=" [${YELLOW}message${PLAIN}]"
|
||||
|
||||
echo "Looking for Java..."
|
||||
|
||||
# Honor JAVA_HOME
|
||||
if [ -n "$JAVA_HOME" ]; then
|
||||
echo -e "$WARNING JAVA_HOME was detected, using $JAVA_HOME..."
|
||||
PATH="$JAVA_HOME/bin:$PATH"
|
||||
fi
|
||||
|
||||
# Always prefer relative runtime/jre
|
||||
if [[ "$DIR" == *"/Contents/MacOS"* ]]; then
|
||||
PATH="$DIR/../PlugIns/Java.runtime/Contents/Home/bin:$PATH"
|
||||
else
|
||||
PATH="$DIR/runtime/bin:$DIR/jre/bin:$PATH"
|
||||
fi
|
||||
|
||||
# Check for user overridable launch options
|
||||
if [ -n "${dollar}${launch.overrides}" ]; then
|
||||
echo -e "$MESSAGE Picked up additional launch options: ${dollar}${launch.overrides}"
|
||||
LAUNCH_OPTS="$LAUNCH_OPTS ${dollar}${launch.overrides}"
|
||||
fi
|
||||
|
||||
# Fallback on some known locations
|
||||
if ! command -v java > /dev/null ; then
|
||||
if [[ "$OSTYPE" == "darwin"* ]]; then
|
||||
# Apple: Fallback on system-wide install
|
||||
DEFAULTS_READ=$(defaults read ${apple.bundleid} ${launch.overrides} 2>/dev/null) || true
|
||||
if [ -n "$DEFAULTS_READ" ]; then
|
||||
echo -e "$MESSAGE Picked up additional launch options: $DEFAULTS_READ"
|
||||
LAUNCH_OPTS="$LAUNCH_OPTS $DEFAULTS_READ"
|
||||
fi
|
||||
MAC_PRIMARY="/usr/libexec/java_home"
|
||||
MAC_FALLBACK="/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/bin"
|
||||
echo "Trying $MAC_PRIMARY..."
|
||||
if "$MAC_PRIMARY" -v $JAVA_MIN+ &>/dev/null; then
|
||||
echo -e "$SUCCESS Using \"$MAC_PRIMARY -v $JAVA_MIN+ --exec\" to launch $ABOUT_TITLE"
|
||||
java() {
|
||||
"$MAC_PRIMARY" -v $JAVA_MIN+ --exec java "$@"
|
||||
}
|
||||
elif [ -d "/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/bin" ]; then
|
||||
echo -e "$WARNING No luck using $MAC_PRIMARY"
|
||||
echo "Trying $MAC_FALLBACK..."
|
||||
java() {
|
||||
"$MAC_FALLBACK/java" "$@"
|
||||
}
|
||||
fi
|
||||
else
|
||||
# Linux/Unix: Fallback on known install location(s)
|
||||
PATH="$PATH:/usr/java/latest/bin/"
|
||||
fi
|
||||
fi
|
||||
|
||||
if command -v java > /dev/null ; then
|
||||
echo -e "$SUCCESS Java was found: $(command -v java)"
|
||||
else
|
||||
echo -e "$FAILURE Please install Java $JAVA_MIN or higher to continue"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Verify the bundled Java version actually works
|
||||
if test -f "$DIR/runtime/bin/java" ; then
|
||||
echo "Verifying the bundled Java version can run on this platform..."
|
||||
if "$DIR/runtime/bin/java" -version &> /dev/null ; then
|
||||
echo -e "$SUCCESS Bundled Java version is OK"
|
||||
else
|
||||
echo -e "$FAILURE Sorry, this version of $ABOUT_TITLE cannot be installed on this system:\n"
|
||||
file "$DIR/runtime/bin/java"
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
# Make sure Java version is sufficient
|
||||
echo "Verifying the Java version is $JAVA_MIN+..."
|
||||
curver=$(java -version 2>&1 | grep -i version | awk -F"\"" '{ print $2 }' | awk -F"." '{ print $1 "." $2 }')
|
||||
minver="$JAVA_MIN"
|
||||
if [ -z "$curver" ]; then
|
||||
curver="0.0"
|
||||
fi
|
||||
desired=$(echo -e "$minver\n$curver")
|
||||
actual=$(echo "$desired" |sort -t '.' -k 1,1 -k 2,2 -n)
|
||||
if [ "$desired" != "$actual" ]; then
|
||||
echo -e "$FAILURE Please install Java $JAVA_MIN or higher to continue"
|
||||
exit 1
|
||||
else
|
||||
echo -e "$SUCCESS Java $curver was detected"
|
||||
fi
|
||||
|
||||
jigsaw=$(echo -e "9.0\n$curver")
|
||||
actual=$(echo "$jigsaw" |sort -t '.' -k 1,1 -k 2,2 -n)
|
||||
if [ "$jigsaw" != "$actual" ]; then
|
||||
echo -e "$SUCCESS Java < 9.0, skipping jigsaw options"
|
||||
else
|
||||
echo -e "$SUCCESS Java >= 9.0, adding jigsaw options"
|
||||
LAUNCH_OPTS="$LAUNCH_OPTS ${launch.jigsaw}"
|
||||
if [[ "$OSTYPE" == "darwin"* ]]; then
|
||||
LAUNCH_OPTS="$LAUNCH_OPTS ${apple.launch.jigsaw}"
|
||||
else
|
||||
LAUNCH_OPTS="$LAUNCH_OPTS ${linux.launch.jigsaw}"
|
||||
fi
|
||||
fi
|
||||
|
||||
if command -v java &>/dev/null; then
|
||||
echo -e "$ABOUT_TITLE is starting..."
|
||||
if [[ "$OSTYPE" == "darwin"* ]]; then
|
||||
java $LAUNCH_OPTS -Xdock:name="$ABOUT_TITLE" -Xdock:icon="$DIR/../Resources/$PROPS_FILE.icns" -jar -Dapple.awt.UIElement="true" -Dapple.awt.enableTemplateImages="${java.mask.tray}" -Dapple.awt.application.appearance="system" "$DIR/../Resources/${prefix}$PROPS_FILE.jar" -NSRequiresAquaSystemAppearance False "$@"
|
||||
else
|
||||
java $LAUNCH_OPTS -jar "$PROPS_FILE.jar" "$@"
|
||||
fi
|
||||
else
|
||||
echo -e "$FAILURE Java $JAVA_MIN+ was not found"
|
||||
fi
|
||||
|
||||
popd &>/dev/null
|
||||
@@ -1,38 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Halt on first error
|
||||
set -e
|
||||
|
||||
if [ "$(id -u)" != "0" ]; then
|
||||
echo "This script must be run with root (sudo) privileges" 1>&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Get working directory
|
||||
DIR=$(cd "$(dirname "$0")" && pwd)
|
||||
pushd "$DIR"
|
||||
|
||||
echo "Running uninstall tasks..."
|
||||
if [[ "$OSTYPE" == "darwin"* ]]; then
|
||||
# Uninstall script is in "QZ Tray.app/Contents/Resources/uninstall"
|
||||
# Calculate the path to "QZ Tray.app"
|
||||
APP_DIR=$(cd "$(dirname "$0")/../.." && pwd)
|
||||
|
||||
if [[ "$APP_DIR" != *".app" ]]; then
|
||||
echo -e "\nMalformed app directory. Uninstallation of ${project.name} failed.\n"
|
||||
exit 1
|
||||
fi
|
||||
# Launcher script is in "QZ Tray.app/Contents/MacOS"
|
||||
"$APP_DIR/Contents/MacOS/${project.name}" uninstall
|
||||
else
|
||||
# Uninstall script is in root of app (e.g. "/opt/qz-tray")
|
||||
APP_DIR="$DIR"
|
||||
# Launcher script is adjacent to uninstall script
|
||||
"$APP_DIR/${project.filename}" uninstall
|
||||
fi
|
||||
|
||||
echo "Deleting files..."
|
||||
rm -rf "$APP_DIR"
|
||||
echo -e "\nUninstall of ${project.name} complete.\n"
|
||||
|
||||
popd &>/dev/null
|
||||
@@ -1,23 +0,0 @@
|
||||
<project name="version" basedir="../">
|
||||
<!-- Get version information from JAR -->
|
||||
<target name="get-version">
|
||||
<!-- build.version -->
|
||||
<property file="${basedir}/ant/project.properties"/>
|
||||
<java jar="${dist.dir}/${project.filename}.jar" fork="true" outputproperty="build.version" errorproperty="build.version.error" timeout="60000" failonerror="true">
|
||||
<arg value="--version"/>
|
||||
</java>
|
||||
|
||||
<!-- apple.bundleid -->
|
||||
<java jar="${dist.dir}/${project.filename}.jar" fork="true" outputproperty="apple.bundleid" errorproperty="apple.bundleid.error" timeout="60000" failonerror="true">
|
||||
<arg value="--bundleid"/>
|
||||
</java>
|
||||
<property description="fallback value" name="build.type" value=""/>
|
||||
<property description="fallback value" name="build.version" value=""/>
|
||||
<property description="fallback value" name="apple.bundleid" value=""/>
|
||||
|
||||
<echo level="info">
|
||||
Version : ${build.version}${build.type}
|
||||
Bundle Id : ${apple.bundleid}
|
||||
</echo>
|
||||
</target>
|
||||
</project>
|
||||
@@ -1,110 +0,0 @@
|
||||
<project name="windows-installer" basedir="../../">
|
||||
<property file="ant/project.properties"/>
|
||||
<import file="${basedir}/ant/version.xml"/>
|
||||
<import file="${basedir}/ant/platform-detect.xml"/>
|
||||
<import file="${basedir}/ant/signing.xml"/>
|
||||
<property environment="env"/>
|
||||
|
||||
<target name="build-exe" depends="get-version,platform-detect">
|
||||
<!-- Get the os-preferred name for the target architecture -->
|
||||
<condition property="windows.target.arch" value="arm64">
|
||||
<isset property="target.arch.aarch64"/>
|
||||
</condition>
|
||||
<property name="windows.target.arch" value="x86_64" description="fallback value"/>
|
||||
|
||||
<!-- Sign Libs and Runtime -->
|
||||
<fileset dir="${dist.dir}/" id="win.sign.found">
|
||||
<include name="**/*.dll"/>
|
||||
<include name="**/*.exe"/>
|
||||
</fileset>
|
||||
<!-- Pass all files at once, wrapped in quotes -->
|
||||
<pathconvert pathsep="" "" property="win.sign.separated" refid="win.sign.found"/>
|
||||
<antcall target="sign-file">
|
||||
<param name="sign.file" value=""${win.sign.separated}""/>
|
||||
</antcall>
|
||||
|
||||
<!-- Launcher -->
|
||||
<antcall target="config-compile-sign">
|
||||
<param name="nsis.script.in" value="windows-launcher.nsi.in"/>
|
||||
<param name="nsis.outfile" value="${dist.dir}/${project.filename}.exe"/>
|
||||
</antcall>
|
||||
|
||||
<!-- Debug Launcher -->
|
||||
<copy file="ant/windows/windows-launcher.nsi.in" tofile="ant/windows/windows-debug-launcher.nsi.in" overwrite="true"/>
|
||||
<replace file="ant/windows/windows-debug-launcher.nsi.in" token="$javaw" value="$java"/>
|
||||
<replace file="ant/windows/windows-debug-launcher.nsi.in" token="/assets/branding/windows-icon.ico" value="/ant/windows/nsis/console.ico"/>
|
||||
<antcall target="config-compile-sign">
|
||||
<param name="nsis.script.in" value="windows-debug-launcher.nsi.in"/>
|
||||
<param name="nsis.outfile" value="${dist.dir}/${project.filename}-console.exe"/>
|
||||
</antcall>
|
||||
|
||||
<!-- Uninstaller -->
|
||||
<antcall target="config-compile-sign">
|
||||
<param name="nsis.script.in" value="windows-uninstaller.nsi.in"/>
|
||||
<param name="nsis.outfile" value="${dist.dir}/uninstall.exe"/>
|
||||
</antcall>
|
||||
|
||||
<!-- Installer (bundles dist/ payload) -->
|
||||
<antcall target="config-compile-sign">
|
||||
<param name="nsis.script.in" value="windows-installer.nsi.in"/>
|
||||
<param name="nsis.outfile" value="${out.dir}/${project.filename}${build.type}-${build.version}-${windows.target.arch}.exe"/>
|
||||
</antcall>
|
||||
</target>
|
||||
|
||||
<target name="config-compile-sign" depends="find-nsisbin">
|
||||
<echo level="info">Creating ${nsis.outfile} using ${nsisbin}</echo>
|
||||
|
||||
<!-- Calculate file name without suffix -->
|
||||
<basename property="nsis.script.out" file="${nsis.script.in}" suffix=".in"/>
|
||||
|
||||
<!-- Configure the nsi script with ant parameters -->
|
||||
<copy file="ant/windows/${nsis.script.in}" tofile="${build.dir}/${nsis.script.out}" overwrite="true">
|
||||
<filterchain><expandproperties/></filterchain>
|
||||
</copy>
|
||||
|
||||
<!-- Create the exe -->
|
||||
<exec executable="${nsisbin}" failonerror="true">
|
||||
<arg value="${build.dir}/${nsis.script.out}"/>
|
||||
</exec>
|
||||
|
||||
<!-- Sign the exe -->
|
||||
<antcall target="sign-file">
|
||||
<param name="sign.file" value="${nsis.outfile}"/>
|
||||
</antcall>
|
||||
</target>
|
||||
|
||||
<target name="find-nsisbin" depends="nsisbin-from-unix,nsisbin-from-32,nsisbin-from-64"/>
|
||||
|
||||
<!-- Unix makensis (non-Windows hosts) -->
|
||||
<target name="nsisbin-from-unix" unless="env.windir">
|
||||
<property name="nsisbin" value="makensis"/>
|
||||
</target>
|
||||
|
||||
<!-- Win32 makensis -->
|
||||
<target name="nsisbin-from-32" unless="env.ProgramFiles(x86)">
|
||||
<property description="suppress property warning" name="env.ProgramFiles" value="C:/Program Files"/>
|
||||
<property name="nsisbin" value="${env.ProgramFiles}/NSIS/makensis.exe"/>
|
||||
</target>
|
||||
|
||||
<!-- Win64 makensis -->
|
||||
<target name="nsisbin-from-64" if="env.ProgramFiles(x86)">
|
||||
<property description="suppress property warning" name="env.ProgramFiles(x86)" value="C:/Program Files (x86)"/>
|
||||
<property name="nsisbin" value="${env.ProgramFiles(x86)}/NSIS/makensis.exe"/>
|
||||
</target>
|
||||
|
||||
<target name="copy-dlls" if="target.os.windows">
|
||||
<echo level="info">Copying native library files to libs</echo>
|
||||
<copy todir="${dist.dir}/libs" flatten="true" verbose="true">
|
||||
<fileset dir="${out.dir}/libs-temp">
|
||||
<!--x86_64-->
|
||||
<include name="**/win32-x86-64/*" if="target.arch.x86_64"/> <!-- jna/hid4java -->
|
||||
<include name="**/windows-x86_64/*" if="target.arch.x86_64"/> <!-- usb4java -->
|
||||
<include name="**/windows_64/*" if="target.arch.x86_64"/> <!-- jssc -->
|
||||
<!--aarch64-->
|
||||
<include name="**/win32-aarch64/*" if="target.arch.aarch64"/> <!-- jna/hid4java -->
|
||||
<include name="**/windows-aarch64/*" if="target.arch.aarch64"/> <!-- usb4java -->
|
||||
<include name="**/windows_arm64/*" if="target.arch.aarch64"/> <!-- jssc -->
|
||||
</fileset>
|
||||
</copy>
|
||||
</target>
|
||||
</project>
|
||||
@@ -1,143 +0,0 @@
|
||||
!include FileFunc.nsh
|
||||
!include LogicLib.nsh
|
||||
!include x64.nsh
|
||||
|
||||
!include StrRep.nsh
|
||||
!include IndexOf.nsh
|
||||
!include StrTok.nsh
|
||||
|
||||
; Resulting variable
|
||||
Var /GLOBAL java
|
||||
Var /GLOBAL javaw
|
||||
Var /GLOBAL java_major
|
||||
|
||||
; Constants
|
||||
!define EXE "java.exe"
|
||||
|
||||
!define ADOPT "SOFTWARE\Classes\AdoptOpenJDK.jarfile\shell\open\command"
|
||||
!define ECLIPSE "SOFTWARE\Classes\Eclipse Adoptium.jarfile\shell\open\command"
|
||||
!define ECLIPSE_OLD "SOFTWARE\Classes\Eclipse Foundation.jarfile\shell\open\command"
|
||||
|
||||
!define JRE "Software\JavaSoft\Java Runtime Environment"
|
||||
!define JRE32 "Software\Wow6432Node\JavaSoft\Java Runtime Environment"
|
||||
!define JDK "Software\JavaSoft\JDK"
|
||||
!define JDK32 "Software\Wow6432Node\JavaSoft\JDK"
|
||||
|
||||
; Macros
|
||||
!macro _ReadEclipseKey
|
||||
ClearErrors
|
||||
ReadRegStr $0 HKLM "${ECLIPSE}" ""
|
||||
StrCpy $0 "$0" "" 1 ; Remove first double-quote
|
||||
${IndexOf} $1 $0 "$\"" ; Find the index of second double-quote
|
||||
StrCpy $0 "$0" $1 ; Get the string section up to the index
|
||||
IfFileExists "$0" Found
|
||||
!macroend
|
||||
|
||||
!macro _ReadEclipseOldKey
|
||||
ClearErrors
|
||||
ReadRegStr $0 HKLM "${ECLIPSE_OLD}" ""
|
||||
StrCpy $0 "$0" "" 1 ; Remove first double-quote
|
||||
${IndexOf} $1 $0 "$\"" ; Find the index of second double-quote
|
||||
StrCpy $0 "$0" $1 ; Get the string section up to the index
|
||||
IfFileExists "$0" Found
|
||||
!macroend
|
||||
|
||||
!macro _ReadAdoptKey
|
||||
ClearErrors
|
||||
ReadRegStr $0 HKLM "${ADOPT}" ""
|
||||
StrCpy $0 "$0" "" 1 ; Remove first double-quote
|
||||
${IndexOf} $1 $0 "$\"" ; Find the index of second double-quote
|
||||
StrCpy $0 "$0" $1 ; Get the string section up to the index
|
||||
IfFileExists "$0" Found
|
||||
!macroend
|
||||
|
||||
!macro _ReadReg key
|
||||
ClearErrors
|
||||
ReadRegStr $0 HKLM "${key}" "CurrentVersion"
|
||||
ReadRegStr $0 HKLM "${key}\$0" "JavaHome"
|
||||
IfErrors +2 0
|
||||
StrCpy $0 "$0\bin\${EXE}"
|
||||
IfFileExists "$0" Found
|
||||
!macroend
|
||||
|
||||
!macro _ReadPayload root path
|
||||
ClearErrors
|
||||
StrCpy $0 "${root}\${path}\bin\${EXE}"
|
||||
IfFileExists $0 Found
|
||||
!macroend
|
||||
|
||||
!macro _ReadWorking path
|
||||
ClearErrors
|
||||
StrCpy $0 "$EXEDIR\${path}\bin\${EXE}"
|
||||
IfFileExists $0 Found
|
||||
!macroend
|
||||
|
||||
!macro _ReadEnv var
|
||||
ClearErrors
|
||||
ReadEnvStr $0 "${var}"
|
||||
StrCpy $0 "$0\bin\${EXE}"
|
||||
IfFileExists "$0" Found
|
||||
!macroend
|
||||
|
||||
; Create the shared function.
|
||||
!macro _FindJava un
|
||||
Function ${un}FindJava
|
||||
; Snag payload directory off the stack
|
||||
exch $R0
|
||||
|
||||
${If} ${RunningX64}
|
||||
SetRegView 64
|
||||
${EndIf}
|
||||
|
||||
; Check payload directories
|
||||
!insertmacro _ReadPayload "$R0" "runtime"
|
||||
|
||||
; Check relative directories
|
||||
!insertmacro _ReadWorking "runtime"
|
||||
!insertmacro _ReadWorking "jre"
|
||||
|
||||
; Check common env vars
|
||||
!insertmacro _ReadEnv "JAVA_HOME"
|
||||
|
||||
; Check registry
|
||||
!insertmacro _ReadEclipseKey
|
||||
!insertmacro _ReadEclipseOldKey
|
||||
!insertmacro _ReadAdoptKey
|
||||
!insertmacro _ReadReg "${JRE}"
|
||||
!insertmacro _ReadReg "${JRE32}"
|
||||
!insertmacro _ReadReg "${JDK}"
|
||||
!insertmacro _ReadReg "${JDK32}"
|
||||
|
||||
; Give up. Use java.exe and hope it works
|
||||
StrCpy $0 "${EXE}"
|
||||
|
||||
; Set global var
|
||||
Found:
|
||||
StrCpy $java $0
|
||||
${StrRep} '$java' '$java' 'javaw.exe' '${EXE}' ; AdoptOpenJDK returns "javaw.exe"
|
||||
${StrRep} '$javaw' '$java' '${EXE}' 'javaw.exe'
|
||||
|
||||
; Discard payload directory
|
||||
pop $R0
|
||||
|
||||
; Detect java version
|
||||
nsExec::ExecToStack '"$java" -version'
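; ExecToStack pushes the exit code first, then the captured output of "java -version"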
|
||||
Pop $0
|
||||
Pop $1
|
||||
; Isolate version number, e.g. "1.8.0"
|
||||
${StrTok} $0 "$1" "$\"" "1" "1"
|
||||
; Isolate major version
|
||||
${StrTok} $R0 "$0" "." "0" "1"
|
||||
; Handle old 1.x.x version format
|
||||
${If} "$R0" == "1"
|
||||
${StrTok} $R0 "$0" "." "1" "1"
|
||||
${EndIf}
|
||||
|
||||
; Convert to integer
|
||||
IntOp $java_major $R0 + 0
|
||||
FunctionEnd
|
||||
!macroend
|
||||
|
||||
; Allows registering identical functions for install and uninstall
|
||||
!insertmacro _FindJava ""
|
||||
;!insertmacro _FindJava "un."
|
||||
@@ -1,28 +0,0 @@
|
||||
!define IndexOf "!insertmacro IndexOf"
|
||||
|
||||
!macro IndexOf Var Str Char
|
||||
Push "${Char}"
|
||||
Push "${Str}"
|
||||
|
||||
Exch $R0
|
||||
Exch
|
||||
Exch $R1
|
||||
Push $R2
|
||||
Push $R3
|
||||
|
||||
StrCpy $R3 $R0
|
||||
StrCpy $R0 -1
|
||||
IntOp $R0 $R0 + 1
|
||||
StrCpy $R2 $R3 1 $R0
|
||||
StrCmp $R2 "" +2
|
||||
StrCmp $R2 $R1 +2 -3
|
||||
|
||||
StrCpy $R0 -1
|
||||
|
||||
Pop $R3
|
||||
Pop $R2
|
||||
Pop $R1
|
||||
Exch $R0
|
||||
|
||||
Pop "${Var}"
|
||||
!macroend
|
||||
@@ -1,5 +0,0 @@
|
||||
; Allow title masquerading
|
||||
!define SetTitleBar "!insertmacro SetTitleBar"
|
||||
!macro SetTitleBar title
|
||||
SendMessage $HWNDPARENT ${WM_SETTEXT} 0 "STR:${title}"
|
||||
!macroend
|
||||
@@ -1,501 +0,0 @@
|
||||
#################################################################################
|
||||
# StdUtils plug-in for NSIS
|
||||
# Copyright (C) 2004-2018 LoRd_MuldeR <MuldeR2@GMX.de>
|
||||
#
|
||||
# This library is free software; you can redistribute it and/or
|
||||
# modify it under the terms of the GNU Lesser General Public
|
||||
# License as published by the Free Software Foundation; either
|
||||
# version 2.1 of the License, or (at your option) any later version.
|
||||
#
|
||||
# This library is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
# Lesser General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU Lesser General Public
|
||||
# License along with this library; if not, write to the Free Software
|
||||
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
#
|
||||
# http://www.gnu.org/licenses/lgpl-2.1.txt
|
||||
#################################################################################
|
||||
|
||||
# DEVELOPER NOTES:
|
||||
# - Please see "https://github.com/lordmulder/stdutils/" for news and updates!
|
||||
# - Please see "Docs\StdUtils\StdUtils.html" for detailed function descriptions!
|
||||
# - Please see "Examples\StdUtils\StdUtilsTest.nsi" for usage examples!
|
||||
|
||||
#################################################################################
|
||||
# FUNCTION DECLARATIONS
|
||||
#################################################################################
|
||||
|
||||
!ifndef ___STDUTILS__NSH___
|
||||
!define ___STDUTILS__NSH___
|
||||
|
||||
!define StdUtils.Time '!insertmacro _StdU_Time' #time(), as in C standard library
|
||||
!define StdUtils.GetMinutes '!insertmacro _StdU_GetMinutes' #GetSystemTimeAsFileTime(), returns the number of minutes
|
||||
!define StdUtils.GetHours '!insertmacro _StdU_GetHours' #GetSystemTimeAsFileTime(), returns the number of hours
|
||||
!define StdUtils.GetDays '!insertmacro _StdU_GetDays' #GetSystemTimeAsFileTime(), returns the number of days
|
||||
!define StdUtils.Rand '!insertmacro _StdU_Rand' #rand(), as in C standard library
|
||||
!define StdUtils.RandMax '!insertmacro _StdU_RandMax' #rand(), as in C standard library, with maximum value
|
||||
!define StdUtils.RandMinMax '!insertmacro _StdU_RandMinMax' #rand(), as in C standard library, with minimum/maximum value
|
||||
!define StdUtils.RandList '!insertmacro _StdU_RandList' #rand(), as in C standard library, with list support
|
||||
!define StdUtils.RandBytes '!insertmacro _StdU_RandBytes' #Generates random bytes, returned as Base64-encoded string
|
||||
!define StdUtils.FormatStr '!insertmacro _StdU_FormatStr' #sprintf(), as in C standard library, one '%d' placeholder
|
||||
!define StdUtils.FormatStr2 '!insertmacro _StdU_FormatStr2' #sprintf(), as in C standard library, two '%d' placeholders
|
||||
!define StdUtils.FormatStr3 '!insertmacro _StdU_FormatStr3' #sprintf(), as in C standard library, three '%d' placeholders
|
||||
!define StdUtils.ScanStr '!insertmacro _StdU_ScanStr' #sscanf(), as in C standard library, one '%d' placeholder
|
||||
!define StdUtils.ScanStr2 '!insertmacro _StdU_ScanStr2' #sscanf(), as in C standard library, two '%d' placeholders
|
||||
!define StdUtils.ScanStr3 '!insertmacro _StdU_ScanStr3' #sscanf(), as in C standard library, three '%d' placeholders
|
||||
!define StdUtils.TrimStr            '!insertmacro _StdU_TrimStr'          #Remove whitespace from string, left and right
!define StdUtils.TrimStrLeft        '!insertmacro _StdU_TrimStrLeft'      #Remove whitespace from string, left side only
!define StdUtils.TrimStrRight       '!insertmacro _StdU_TrimStrRight'     #Remove whitespace from string, right side only
|
||||
!define StdUtils.RevStr '!insertmacro _StdU_RevStr' #Reverse a string, e.g. "reverse me" <-> "em esrever"
|
||||
!define StdUtils.ValidFileName '!insertmacro _StdU_ValidFileName' #Test whether string is a valid file name - no paths allowed
|
||||
!define StdUtils.ValidPathSpec '!insertmacro _StdU_ValidPathSpec' #Test whether string is a valid full(!) path specification
|
||||
!define StdUtils.ValidDomainName '!insertmacro _StdU_ValidDomain' #Test whether string is a valid host name or domain name
|
||||
!define StdUtils.StrToUtf8 '!insertmacro _StdU_StrToUtf8' #Convert string from Unicode (UTF-16) or ANSI to UTF-8 bytes
|
||||
!define StdUtils.StrFromUtf8 '!insertmacro _StdU_StrFromUtf8' #Convert string from UTF-8 bytes to Unicode (UTF-16) or ANSI
|
||||
!define StdUtils.SHFileMove '!insertmacro _StdU_SHFileMove' #SHFileOperation(), using the FO_MOVE operation
|
||||
!define StdUtils.SHFileCopy '!insertmacro _StdU_SHFileCopy' #SHFileOperation(), using the FO_COPY operation
|
||||
!define StdUtils.AppendToFile '!insertmacro _StdU_AppendToFile' #Append contents of an existing file to another file
|
||||
!define StdUtils.ExecShellAsUser '!insertmacro _StdU_ExecShlUser' #ShellExecute() as NON-elevated user from elevated installer
|
||||
!define StdUtils.InvokeShellVerb '!insertmacro _StdU_InvkeShlVrb' #Invokes a "shell verb", e.g. for pinning items to the taskbar
|
||||
!define StdUtils.ExecShellWaitEx '!insertmacro _StdU_ExecShlWaitEx' #ShellExecuteEx(), returns the handle of the new process
|
||||
!define StdUtils.WaitForProcEx '!insertmacro _StdU_WaitForProcEx' #WaitForSingleObject(), e.g. to wait for a running process
|
||||
!define StdUtils.GetParameter '!insertmacro _StdU_GetParameter' #Get the value of a specific command-line option
|
||||
!define StdUtils.TestParameter '!insertmacro _StdU_TestParameter' #Test whether a specific command-line option has been set
|
||||
!define StdUtils.ParameterCnt '!insertmacro _StdU_ParameterCnt' #Get number of command-line tokens, similar to argc in main()
|
||||
!define StdUtils.ParameterStr '!insertmacro _StdU_ParameterStr' #Get the n-th command-line token, similar to argv[i] in main()
|
||||
!define StdUtils.GetAllParameters '!insertmacro _StdU_GetAllParams' #Get complete command-line, but without executable name
|
||||
!define StdUtils.GetRealOSVersion '!insertmacro _StdU_GetRealOSVer' #Get the *real* Windows version number, even on Windows 8.1+
|
||||
!define StdUtils.GetRealOSBuildNo '!insertmacro _StdU_GetRealOSBld' #Get the *real* Windows build number, even on Windows 8.1+
|
||||
!define StdUtils.GetRealOSName '!insertmacro _StdU_GetRealOSStr' #Get the *real* Windows version, as a "friendly" name
|
||||
!define StdUtils.GetOSEdition '!insertmacro _StdU_GetOSEdition' #Get the Windows edition, i.e. "workstation" or "server"
|
||||
!define StdUtils.GetOSReleaseId '!insertmacro _StdU_GetOSRelIdNo' #Get the Windows release identifier (on Windows 10)
|
||||
!define StdUtils.GetOSReleaseName '!insertmacro _StdU_GetOSRelIdStr' #Get the Windows release (on Windows 10), as a "friendly" name
|
||||
!define StdUtils.VerifyOSVersion '!insertmacro _StdU_VrfyRealOSVer' #Compare *real* operating system to an expected version number
|
||||
!define StdUtils.VerifyOSBuildNo '!insertmacro _StdU_VrfyRealOSBld' #Compare *real* operating system to an expected build number
|
||||
!define StdUtils.HashText '!insertmacro _StdU_HashText' #Compute hash from text string (CRC32, MD5, SHA1/2/3, BLAKE2)
|
||||
!define StdUtils.HashFile '!insertmacro _StdU_HashFile' #Compute hash from file (CRC32, MD5, SHA1/2/3, BLAKE2)
|
||||
!define StdUtils.NormalizePath '!insertmacro _StdU_NormalizePath' #Simplifies the path to produce a direct, well-formed path
|
||||
!define StdUtils.GetParentPath '!insertmacro _StdU_GetParentPath' #Get parent path by removing the last component from the path
|
||||
!define StdUtils.SplitPath '!insertmacro _StdU_SplitPath' #Split the components of the given path
|
||||
!define StdUtils.GetDrivePart '!insertmacro _StdU_GetDrivePart' #Get drive component of path
|
||||
!define StdUtils.GetDirectoryPart '!insertmacro _StdU_GetDirPart' #Get directory component of path
|
||||
!define StdUtils.GetFileNamePart '!insertmacro _StdU_GetFNamePart' #Get file name component of path
|
||||
!define StdUtils.GetExtensionPart '!insertmacro _StdU_GetExtnPart' #Get file extension component of path
|
||||
!define StdUtils.TimerCreate '!insertmacro _StdU_TimerCreate' #Create a new event-timer that will be triggered periodically
|
||||
!define StdUtils.TimerDestroy '!insertmacro _StdU_TimerDestroy' #Destroy a running timer created with TimerCreate()
|
||||
!define StdUtils.ProtectStr '!insertmacro _StdU_PrtctStr' #Protect a given String using Windows' DPAPI
|
||||
!define StdUtils.UnprotectStr '!insertmacro _StdU_UnprtctStr' #Unprotect a string that was protected via ProtectStr()
|
||||
!define StdUtils.GetLibVersion '!insertmacro _StdU_GetLibVersion' #Get the current StdUtils library version (for debugging)
|
||||
!define StdUtils.SetVerbose '!insertmacro _StdU_SetVerbose' #Enable or disable "verbose" mode (for debugging)
|
||||
|
||||
|
||||
#################################################################################
|
||||
# MACRO DEFINITIONS
|
||||
#################################################################################
|
||||
|
||||
!macro _StdU_Time out
|
||||
StdUtils::Time /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetMinutes out
|
||||
StdUtils::GetMinutes /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetHours out
|
||||
StdUtils::GetHours /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetDays out
|
||||
StdUtils::GetDays /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_Rand out
|
||||
StdUtils::Rand /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_RandMax out max
|
||||
push ${max}
|
||||
StdUtils::RandMax /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_RandMinMax out min max
|
||||
push ${min}
|
||||
push ${max}
|
||||
StdUtils::RandMinMax /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_RandList count max
|
||||
push ${max}
|
||||
push ${count}
|
||||
StdUtils::RandList /NOUNLOAD
|
||||
!macroend
|
||||
|
||||
!macro _StdU_RandBytes out count
|
||||
push ${count}
|
||||
StdUtils::RandBytes /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_FormatStr out format val
|
||||
push `${format}`
|
||||
push ${val}
|
||||
StdUtils::FormatStr /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_FormatStr2 out format val1 val2
|
||||
push `${format}`
|
||||
push ${val1}
|
||||
push ${val2}
|
||||
StdUtils::FormatStr2 /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_FormatStr3 out format val1 val2 val3
|
||||
push `${format}`
|
||||
push ${val1}
|
||||
push ${val2}
|
||||
push ${val3}
|
||||
StdUtils::FormatStr3 /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_ScanStr out format input default
|
||||
push `${format}`
|
||||
push `${input}`
|
||||
push ${default}
|
||||
StdUtils::ScanStr /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_ScanStr2 out1 out2 format input default1 default2
|
||||
push `${format}`
|
||||
push `${input}`
|
||||
push ${default1}
|
||||
push ${default2}
|
||||
StdUtils::ScanStr2 /NOUNLOAD
|
||||
pop ${out1}
|
||||
pop ${out2}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_ScanStr3 out1 out2 out3 format input default1 default2 default3
|
||||
push `${format}`
|
||||
push `${input}`
|
||||
push ${default1}
|
||||
push ${default2}
|
||||
push ${default3}
|
||||
StdUtils::ScanStr3 /NOUNLOAD
|
||||
pop ${out1}
|
||||
pop ${out2}
|
||||
pop ${out3}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_TrimStr var
|
||||
push ${var}
|
||||
StdUtils::TrimStr /NOUNLOAD
|
||||
pop ${var}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_TrimStrLeft var
|
||||
push ${var}
|
||||
StdUtils::TrimStrLeft /NOUNLOAD
|
||||
pop ${var}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_TrimStrRight var
|
||||
push ${var}
|
||||
StdUtils::TrimStrRight /NOUNLOAD
|
||||
pop ${var}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_RevStr var
|
||||
push ${var}
|
||||
StdUtils::RevStr /NOUNLOAD
|
||||
pop ${var}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_ValidFileName out test
|
||||
push `${test}`
|
||||
StdUtils::ValidFileName /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_ValidPathSpec out test
|
||||
push `${test}`
|
||||
StdUtils::ValidPathSpec /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_ValidDomain out test
|
||||
push `${test}`
|
||||
StdUtils::ValidDomainName /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
|
||||
!macro _StdU_StrToUtf8 out str
|
||||
push `${str}`
|
||||
StdUtils::StrToUtf8 /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_StrFromUtf8 out trnc str
|
||||
push ${trnc}
|
||||
push `${str}`
|
||||
StdUtils::StrFromUtf8 /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_SHFileMove out from to hwnd
|
||||
push `${from}`
|
||||
push `${to}`
|
||||
push ${hwnd}
|
||||
StdUtils::SHFileMove /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_SHFileCopy out from to hwnd
|
||||
push `${from}`
|
||||
push `${to}`
|
||||
push ${hwnd}
|
||||
StdUtils::SHFileCopy /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_AppendToFile out from dest offset maxlen
|
||||
push `${from}`
|
||||
push `${dest}`
|
||||
push ${offset}
|
||||
push ${maxlen}
|
||||
StdUtils::AppendToFile /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_ExecShlUser out file verb args
|
||||
push `${file}`
|
||||
push `${verb}`
|
||||
push `${args}`
|
||||
StdUtils::ExecShellAsUser /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_InvkeShlVrb out path file verb_id
|
||||
push "${path}"
|
||||
push "${file}"
|
||||
push ${verb_id}
|
||||
StdUtils::InvokeShellVerb /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_ExecShlWaitEx out_res out_val file verb args
|
||||
push `${file}`
|
||||
push `${verb}`
|
||||
push `${args}`
|
||||
StdUtils::ExecShellWaitEx /NOUNLOAD
|
||||
pop ${out_res}
|
||||
pop ${out_val}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_WaitForProcEx out handle
|
||||
push `${handle}`
|
||||
StdUtils::WaitForProcEx /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetParameter out name default
|
||||
push `${name}`
|
||||
push `${default}`
|
||||
StdUtils::GetParameter /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_TestParameter out name
|
||||
push `${name}`
|
||||
StdUtils::TestParameter /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_ParameterCnt out
|
||||
StdUtils::ParameterCnt /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_ParameterStr out index
|
||||
push ${index}
|
||||
StdUtils::ParameterStr /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetAllParams out truncate
|
||||
push `${truncate}`
|
||||
StdUtils::GetAllParameters /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetRealOSVer out_major out_minor out_spack
|
||||
StdUtils::GetRealOsVersion /NOUNLOAD
|
||||
pop ${out_major}
|
||||
pop ${out_minor}
|
||||
pop ${out_spack}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetRealOSBld out
|
||||
StdUtils::GetRealOsBuildNo /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetRealOSStr out
|
||||
StdUtils::GetRealOsName /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_VrfyRealOSVer out major minor spack
|
||||
push `${major}`
|
||||
push `${minor}`
|
||||
push `${spack}`
|
||||
StdUtils::VerifyRealOsVersion /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_VrfyRealOSBld out build
|
||||
push `${build}`
|
||||
StdUtils::VerifyRealOsBuildNo /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetOSEdition out
|
||||
StdUtils::GetOsEdition /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetOSRelIdNo out
|
||||
StdUtils::GetOsReleaseId /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetOSRelIdStr out
|
||||
StdUtils::GetOsReleaseName /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_HashText out type text
|
||||
push `${type}`
|
||||
push `${text}`
|
||||
StdUtils::HashText /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_HashFile out type file
|
||||
push `${type}`
|
||||
push `${file}`
|
||||
StdUtils::HashFile /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_NormalizePath out path
|
||||
push `${path}`
|
||||
StdUtils::NormalizePath /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetParentPath out path
|
||||
push `${path}`
|
||||
StdUtils::GetParentPath /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_SplitPath out_drive out_dir out_fname out_ext path
|
||||
push `${path}`
|
||||
StdUtils::SplitPath /NOUNLOAD
|
||||
pop ${out_drive}
|
||||
pop ${out_dir}
|
||||
pop ${out_fname}
|
||||
pop ${out_ext}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetDrivePart out path
|
||||
push `${path}`
|
||||
StdUtils::GetDrivePart /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetDirPart out path
|
||||
push `${path}`
|
||||
StdUtils::GetDirectoryPart /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetFNamePart out path
|
||||
push `${path}`
|
||||
StdUtils::GetFileNamePart /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetExtnPart out path
|
||||
push `${path}`
|
||||
StdUtils::GetExtensionPart /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_TimerCreate out callback interval
|
||||
GetFunctionAddress ${out} ${callback}
|
||||
push ${out}
|
||||
push ${interval}
|
||||
StdUtils::TimerCreate /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_TimerDestroy out timer_id
|
||||
push ${timer_id}
|
||||
StdUtils::TimerDestroy /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_PrtctStr out dpsc salt text
|
||||
push `${dpsc}`
|
||||
push `${salt}`
|
||||
push `${text}`
|
||||
StdUtils::ProtectStr /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_UnprtctStr out trnc salt data
|
||||
push `${trnc}`
|
||||
push `${salt}`
|
||||
push `${data}`
|
||||
StdUtils::UnprotectStr /NOUNLOAD
|
||||
pop ${out}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_GetLibVersion out_ver out_tst
|
||||
StdUtils::GetLibVersion /NOUNLOAD
|
||||
pop ${out_ver}
|
||||
pop ${out_tst}
|
||||
!macroend
|
||||
|
||||
!macro _StdU_SetVerbose enable
|
||||
Push ${enable}
|
||||
StdUtils::SetVerboseMode /NOUNLOAD
|
||||
!macroend
|
||||
|
||||
|
||||
#################################################################################
|
||||
# MAGIC NUMBERS
|
||||
#################################################################################
|
||||
|
||||
!define StdUtils.Const.ShellVerb.PinToTaskbar 0
|
||||
!define StdUtils.Const.ShellVerb.UnpinFromTaskbar 1
|
||||
!define StdUtils.Const.ShellVerb.PinToStart 2
|
||||
!define StdUtils.Const.ShellVerb.UnpinFromStart 3
|
||||
|
||||
!endif # !___STDUTILS__NSH___
|
||||
@@ -1,72 +0,0 @@
|
||||
!define StrLoc "!insertmacro StrLoc"
|
||||
|
||||
!macro StrLoc ResultVar String SubString StartPoint
|
||||
Push "${String}"
|
||||
Push "${SubString}"
|
||||
Push "${StartPoint}"
|
||||
Call StrLoc
|
||||
Pop "${ResultVar}"
|
||||
!macroend
|
||||
|
||||
Function StrLoc
|
||||
/*After this point:
|
||||
------------------------------------------
|
||||
$R0 = StartPoint (input)
|
||||
$R1 = SubString (input)
|
||||
$R2 = String (input)
|
||||
$R3 = SubStringLen (temp)
|
||||
$R4 = StrLen (temp)
|
||||
$R5 = StartCharPos (temp)
|
||||
$R6 = TempStr (temp)*/
|
||||
|
||||
;Get input from user
|
||||
Exch $R0
|
||||
Exch
|
||||
Exch $R1
|
||||
Exch 2
|
||||
Exch $R2
|
||||
Push $R3
|
||||
Push $R4
|
||||
Push $R5
|
||||
Push $R6
|
||||
|
||||
;Get "String" and "SubString" length
|
||||
StrLen $R3 $R1
|
||||
StrLen $R4 $R2
|
||||
;Start "StartCharPos" counter
|
||||
StrCpy $R5 0
|
||||
|
||||
;Loop until "SubString" is found or "String" reaches its end
|
||||
${Do}
|
||||
;Remove everything before and after the searched part ("TempStr")
|
||||
StrCpy $R6 $R2 $R3 $R5
|
||||
|
||||
;Compare "TempStr" with "SubString"
|
||||
${If} $R6 == $R1
|
||||
${If} $R0 == `<`
|
||||
IntOp $R6 $R3 + $R5
|
||||
IntOp $R0 $R4 - $R6
|
||||
${Else}
|
||||
StrCpy $R0 $R5
|
||||
${EndIf}
|
||||
${ExitDo}
|
||||
${EndIf}
|
||||
;If not "SubString", this could be "String"'s end
|
||||
${If} $R5 >= $R4
|
||||
StrCpy $R0 ``
|
||||
${ExitDo}
|
||||
${EndIf}
|
||||
;If not, continue the loop
|
||||
IntOp $R5 $R5 + 1
|
||||
${Loop}
|
||||
|
||||
;Return output to user
|
||||
Pop $R6
|
||||
Pop $R5
|
||||
Pop $R4
|
||||
Pop $R3
|
||||
Pop $R2
|
||||
Exch
|
||||
Pop $R1
|
||||
Exch $R0
|
||||
FunctionEnd
|
||||
@@ -1,66 +0,0 @@
|
||||
!define StrRep "!insertmacro StrRep"
|
||||
!macro StrRep output string old new
|
||||
Push `${string}`
|
||||
Push `${old}`
|
||||
Push `${new}`
|
||||
;!ifdef __UNINSTALL__
|
||||
; Call un.StrRep
|
||||
;!else
|
||||
Call StrRep
|
||||
;!endif
|
||||
Pop ${output}
|
||||
!macroend
|
||||
|
||||
!macro Func_StrRep un
|
||||
Function ${un}StrRep
|
||||
Exch $R2 ;new
|
||||
Exch 1
|
||||
Exch $R1 ;old
|
||||
Exch 2
|
||||
Exch $R0 ;string
|
||||
Push $R3
|
||||
Push $R4
|
||||
Push $R5
|
||||
Push $R6
|
||||
Push $R7
|
||||
Push $R8
|
||||
Push $R9
|
||||
|
||||
StrCpy $R3 0
|
||||
StrLen $R4 $R1
|
||||
StrLen $R6 $R0
|
||||
StrLen $R9 $R2
|
||||
loop:
|
||||
StrCpy $R5 $R0 $R4 $R3
|
||||
StrCmp $R5 $R1 found
|
||||
StrCmp $R3 $R6 done
|
||||
IntOp $R3 $R3 + 1 ;move offset by 1 to check the next character
|
||||
Goto loop
|
||||
found:
|
||||
StrCpy $R5 $R0 $R3
|
||||
IntOp $R8 $R3 + $R4
|
||||
StrCpy $R7 $R0 "" $R8
|
||||
StrCpy $R0 $R5$R2$R7
|
||||
StrLen $R6 $R0
|
||||
IntOp $R3 $R3 + $R9 ;move offset by length of the replacement string
|
||||
Goto loop
|
||||
done:
|
||||
|
||||
Pop $R9
|
||||
Pop $R8
|
||||
Pop $R7
|
||||
Pop $R6
|
||||
Pop $R5
|
||||
Pop $R4
|
||||
Pop $R3
|
||||
Push $R0
|
||||
Push $R1
|
||||
Pop $R0
|
||||
Pop $R1
|
||||
Pop $R0
|
||||
Pop $R2
|
||||
Exch $R1
|
||||
FunctionEnd
|
||||
!macroend
|
||||
!insertmacro Func_StrRep ""
|
||||
;!insertmacro Func_StrRep "un."
|
||||
@@ -1,150 +0,0 @@
|
||||
!define StrTok "!insertmacro StrTok"
|
||||
|
||||
!macro StrTok ResultVar String Separators ResultPart SkipEmptyParts
|
||||
Push "${String}"
|
||||
Push "${Separators}"
|
||||
Push "${ResultPart}"
|
||||
Push "${SkipEmptyParts}"
|
||||
Call StrTok
|
||||
Pop "${ResultVar}"
|
||||
!macroend
|
||||
|
||||
Function StrTok
|
||||
/*After this point:
|
||||
------------------------------------------
|
||||
$0 = SkipEmptyParts (input)
|
||||
$1 = ResultPart (input)
|
||||
$2 = Separators (input)
|
||||
$3 = String (input)
|
||||
$4 = SeparatorsLen (temp)
|
||||
$5 = StrLen (temp)
|
||||
$6 = StartCharPos (temp)
|
||||
$7 = TempStr (temp)
|
||||
$8 = CurrentLoop
|
||||
$9 = CurrentSepChar
|
||||
$R0 = CurrentSepCharNum
|
||||
*/
|
||||
|
||||
;Get input from user
|
||||
Exch $0
|
||||
Exch
|
||||
Exch $1
|
||||
Exch
|
||||
Exch 2
|
||||
Exch $2
|
||||
Exch 2
|
||||
Exch 3
|
||||
Exch $3
|
||||
Exch 3
|
||||
Push $4
|
||||
Push $5
|
||||
Push $6
|
||||
Push $7
|
||||
Push $8
|
||||
Push $9
|
||||
Push $R0
|
||||
|
||||
;Parameter defaults
|
||||
${IfThen} $2 == `` ${|} StrCpy $2 `|` ${|}
|
||||
${IfThen} $1 == `` ${|} StrCpy $1 `L` ${|}
|
||||
${IfThen} $0 == `` ${|} StrCpy $0 `0` ${|}
|
||||
|
||||
;Get "String" and "Separators" length
|
||||
StrLen $4 $2
|
||||
StrLen $5 $3
|
||||
;Start "StartCharPos" and "ResultPart" counters
|
||||
StrCpy $6 0
|
||||
StrCpy $8 -1
|
||||
|
||||
;Loop until "ResultPart" is met, "Separators" is found or
|
||||
;"String" reaches its end
|
||||
ResultPartLoop: ;"CurrentLoop" Loop
|
||||
|
||||
;Increase "CurrentLoop" counter
|
||||
IntOp $8 $8 + 1
|
||||
|
||||
StrSearchLoop:
|
||||
${Do} ;"String" Loop
|
||||
;Remove everything before and after the searched part ("TempStr")
|
||||
StrCpy $7 $3 1 $6
|
||||
|
||||
;Verify if it's the "String" end
|
||||
${If} $6 >= $5
|
||||
;If "CurrentLoop" is what the user wants, remove the part
|
||||
;after "TempStr" and itself and get out of here
|
||||
${If} $8 == $1
|
||||
${OrIf} $1 == `L`
|
||||
StrCpy $3 $3 $6
|
||||
${Else} ;If not, empty "String" and get out of here
|
||||
StrCpy $3 ``
|
||||
${EndIf}
|
||||
StrCpy $R0 `End`
|
||||
${ExitDo}
|
||||
${EndIf}
|
||||
|
||||
;Start "CurrentSepCharNum" counter (for "Separators" Loop)
|
||||
StrCpy $R0 0
|
||||
|
||||
${Do} ;"Separators" Loop
|
||||
;Use one "Separators" character at a time
|
||||
${If} $R0 <> 0
|
||||
StrCpy $9 $2 1 $R0
|
||||
${Else}
|
||||
StrCpy $9 $2 1
|
||||
${EndIf}
|
||||
|
||||
;Go to the next "String" char if it's "Separators" end
|
||||
${IfThen} $R0 >= $4 ${|} ${ExitDo} ${|}
|
||||
|
||||
;Or, if "TempStr" equals "CurrentSepChar", then...
|
||||
${If} $7 == $9
|
||||
StrCpy $7 $3 $6
|
||||
|
||||
;If "String" is empty because this result part doesn't
|
||||
;contain data, verify if "SkipEmptyParts" is activated,
|
||||
;so we don't return the output to user yet
|
||||
|
||||
${If} $7 == ``
|
||||
${AndIf} $0 = 1 ;${TRUE}
|
||||
IntOp $6 $6 + 1
|
||||
StrCpy $3 $3 `` $6
|
||||
StrCpy $6 0
|
||||
Goto StrSearchLoop
|
||||
${ElseIf} $8 == $1
|
||||
StrCpy $3 $3 $6
|
||||
StrCpy $R0 "End"
|
||||
${ExitDo}
|
||||
${EndIf} ;If not, go to the next result part
|
||||
IntOp $6 $6 + 1
|
||||
StrCpy $3 $3 `` $6
|
||||
StrCpy $6 0
|
||||
Goto ResultPartLoop
|
||||
${EndIf}
|
||||
|
||||
;Increase "CurrentSepCharNum" counter
|
||||
IntOp $R0 $R0 + 1
|
||||
${Loop}
|
||||
${IfThen} $R0 == "End" ${|} ${ExitDo} ${|}
|
||||
|
||||
;Increase "StartCharPos" counter
|
||||
IntOp $6 $6 + 1
|
||||
${Loop}
|
||||
|
||||
/*After this point:
|
||||
------------------------------------------
|
||||
$3 = ResultVar (output)*/
|
||||
|
||||
;Return output to user
|
||||
|
||||
Pop $R0
|
||||
Pop $9
|
||||
Pop $8
|
||||
Pop $7
|
||||
Pop $6
|
||||
Pop $5
|
||||
Pop $4
|
||||
Pop $0
|
||||
Pop $1
|
||||
Pop $2
|
||||
Exch $3
|
||||
FunctionEnd
|
||||
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.