initial commit
CHANGELOG.md (new file, 57 lines)
@@ -0,0 +1,57 @@
# Changelog

## [1.3.0] - Security

- Added a login page (username + password)
- Hardened Flask session (HttpOnly, SameSite Lax, configurable lifetime)
- Timing-attack protection via `hmac.compare_digest`
- All API routes protected with `@login_required`
- Logout button in the header
- Credentials configurable in `docker-compose.yml` (`APP_USERNAME`, `APP_PASSWORD`, `SECRET_KEY`)

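The timing-attack entry above relies on constant-time comparison: `hmac.compare_digest` takes the same time regardless of where the inputs first differ, so response timing leaks nothing about the secret. A minimal sketch (function name and signature are illustrative, not the app's exact code):

```python
import hmac

def check_password(candidate: str, expected: str) -> bool:
    # compare_digest examines every byte instead of returning at the
    # first mismatch, so an attacker cannot recover the password
    # byte by byte from response timings
    return hmac.compare_digest(candidate.encode(), expected.encode())
```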
## [1.2.0] - Pause and Stop

- ⏸ Pause button: suspends the transfer between two chunks
- ⏹ Stop button: cancels the transfer and deletes the partial file
- Buttons are greyed out when no transfer is active
- Pause/stop state correctly restored on reconnection

## [1.1.0] - Cache and performance

- Server-side cache with configurable TTL (`CACHE_TTL`, default 60s)
- Automatic background prefetch of the first 5 subdirectories
- Cache invalidation after mkdir, rename, and transfer completion
- ↻ Refresh button forces a reload that bypasses the cache

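The cache described above boils down to a timestamped dictionary behind a lock; a minimal sketch under the same TTL semantics (class and method names are illustrative, not the app's actual code):

```python
import threading
import time

class TTLCache:
    """Sketch of a TTL cache: entries silently expire after `ttl` seconds."""
    def __init__(self, ttl=60):
        self.ttl = ttl
        self._lock = threading.Lock()
        self._data = {}  # key -> (value, insertion timestamp)

    def get(self, key):
        with self._lock:
            entry = self._data.get(key)
            if entry and (time.time() - entry[1]) < self.ttl:
                return entry[0]
            return None  # missing or expired

    def set(self, key, value):
        with self._lock:
            self._data[key] = (value, time.time())

    def invalidate(self, key):
        # called after mkdir/rename/transfer so the next read is fresh
        with self._lock:
            self._data.pop(key, None)
```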
## [1.0.2] - Fixes

- Fix: duplicate queue entry during a transfer (local push removed, sync via server only)
- Fix: `transfer_stop` / `transfer_pause` not declared `global` in `queue_worker`, which killed every subsequent transfer after a first Stop
- Fix: IndentationError on `transfer_thread = None` introduced by a sed patch

## [1.0.1] - Reconnection and progress

- Queue synchronized on page reload via `/api/queue`
- The in-flight transfer and its percentage are restored on reconnection
- `current_percent` tracked server-side so it can be exposed on reconnection

## [1.0.0] - Initial release

- Dual-pane Seedbox / NAS interface
- Sequential transfer queue with real-time WebSocket progress
- Double-click navigation with breadcrumb
- Multi-select on the seedbox side
- Folder creation on both panes
- Renaming on the NAS side only
- Seedbox mounted read-only (`:ro`)
- Flask + Eventlet server for concurrent requests during transfers
- Smartphone-friendly

## [2.0.0] - Plugin architecture

- Complete rewrite around a plugin architecture for filesystems
- `AbstractFS` interface in `plugins/base.py`: every plugin inherits from this class
- `LocalFS` plugin (`plugins/local.py`): replaces the previous direct `os.*` code
- `SFTPfs` plugin (`plugins/sftp.py`): SFTP access via Paramiko, with prefetch
- Auto-discovery registry (`plugins/__init__.py`): every file in `plugins/` is loaded automatically
- Adding a new protocol = creating a single file `plugins/myprotocol.py`
- Connection management in the UI: add, delete, test connection
- Each pane can point at any configured connection
- Connections persisted in `data/connections.json` (Docker volume)
- Default connections (local Seedbox/NAS) created automatically from environment variables
- Universal copy engine: works between any two filesystems (Local→SFTP, SFTP→Local, Local→Local…)
- Added `paramiko` to the dependencies
- New `./data:/app/data` volume for connection persistence

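The plugin convention above (one file per protocol, subclassing `AbstractFS`) can be sketched as follows. The base-class methods shown are assumptions inferred from the changelog, not the repository's actual `plugins/base.py`, and `MemoryFS` is a hypothetical demo plugin:

```python
# Hypothetical stand-in for plugins/base.py (the real interface may differ).
class AbstractFS:
    def connect(self, config): raise NotImplementedError
    def is_connected(self): raise NotImplementedError
    def list(self, path): raise NotImplementedError

# A new protocol would live in its own file, e.g. plugins/memoryfs.py,
# and be picked up by the auto-discovery registry.
class MemoryFS(AbstractFS):
    """Toy in-memory plugin, only to illustrate the shape of a plugin."""
    PLUGIN_LABEL = "In-memory (demo)"

    def connect(self, config):
        self._tree = config.get("tree", {})

    def is_connected(self):
        return hasattr(self, "_tree")

    def list(self, path):
        # directories are nested dicts, files are anything else
        return [{"name": name,
                 "is_dir": isinstance(node, dict),
                 "path": path.rstrip("/") + "/" + name}
                for name, node in self._tree.items()]
```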
Dockerfile (new file, 12 lines)
@@ -0,0 +1,12 @@
FROM python:3.12-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000

CMD ["python", "app.py"]
app.py (new file, 688 lines)
@@ -0,0 +1,688 @@
import eventlet
eventlet.monkey_patch()

import os
import ctypes
import json
import uuid
import threading
import time
import hmac
from functools import wraps
from flask import Flask, render_template, request, jsonify, session, redirect, url_for
from flask_socketio import SocketIO

import plugins as plugin_registry

app = Flask(__name__)
app.config['SECRET_KEY'] = os.environ.get('SECRET_KEY', 'seedmover-secret')
app.config['SESSION_COOKIE_HTTPONLY'] = True
app.config['SESSION_COOKIE_SAMESITE'] = 'Lax'
app.config['PERMANENT_SESSION_LIFETIME'] = int(os.environ.get('SESSION_LIFETIME', 86400))
socketio = SocketIO(app, cors_allowed_origins="*", async_mode='eventlet')

SEEDBOX_PATH = os.environ.get('SEEDBOX_PATH', '/mnt/seedbox')
NAS_PATH = os.environ.get('NAS_PATH', '/mnt/nas')
APP_TITLE = os.environ.get('APP_TITLE', 'SeedMover')
APP_USERNAME = os.environ.get('APP_USERNAME', 'admin')
APP_PASSWORD = os.environ.get('APP_PASSWORD', 'changeme')
CACHE_TTL = int(os.environ.get('CACHE_TTL', 60))
DATA_DIR = os.environ.get('DATA_DIR', '/app/data')
CONNECTIONS_FILE = os.path.join(DATA_DIR, 'connections.json')

os.makedirs(DATA_DIR, exist_ok=True)

HISTORY_FILE = os.path.join(DATA_DIR, 'history.json')
HISTORY_MAX = int(os.environ.get('HISTORY_MAX', 50))
history_lock = threading.Lock()

def _load_history():
    try:
        with open(HISTORY_FILE) as f:
            return json.load(f)
    except Exception:
        return []

def _save_history(entries):
    with open(HISTORY_FILE, 'w') as f:
        json.dump(entries[:HISTORY_MAX], f, indent=2)

def _add_history(entry):
    with history_lock:
        entries = _load_history()
        entries.insert(0, entry)
        _save_history(entries)

# ─── Persisted connections ───────────────────────────────────────────────────
connections_lock = threading.Lock()

def _load_connections():
    try:
        with open(CONNECTIONS_FILE) as f:
            return json.load(f)
    except Exception:
        return {}

def _save_connections(conns):
    with open(CONNECTIONS_FILE, 'w') as f:
        json.dump(conns, f, indent=2)

def _ensure_default_connections():
    conns = _load_connections()
    changed = False
    if 'seedbox' not in conns:
        conns['seedbox'] = {
            'id': 'seedbox', 'name': 'Seedbox', 'type': 'local',
            'readonly': True, 'config': {'root_path': SEEDBOX_PATH}
        }
        changed = True
    if 'nas' not in conns:
        conns['nas'] = {
            'id': 'nas', 'name': 'NAS', 'type': 'local',
            'readonly': False, 'config': {'root_path': NAS_PATH}
        }
        changed = True
    if changed:
        _save_connections(conns)

_ensure_default_connections()

# ─── FS connection pool ──────────────────────────────────────────────────────
fs_pool = {}  # connection_id -> FS instance
fs_pool_lock = threading.Lock()
fs_conn_locks = {}  # connection_id -> RLock (serializes SFTP access)


def _get_conn_lock(connection_id):
    """Return the lock dedicated to this connection (created if missing)."""
    with fs_pool_lock:
        if connection_id not in fs_conn_locks:
            fs_conn_locks[connection_id] = threading.RLock()
        return fs_conn_locks[connection_id]


def get_fs(connection_id):
    conns = _load_connections()
    conn_def = conns.get(connection_id)
    if not conn_def:
        raise ValueError(f"Unknown connection: {connection_id}")
    with fs_pool_lock:
        fs = fs_pool.get(connection_id)
        if fs and fs.is_connected():
            return fs, conn_def
        cls = plugin_registry.get_plugin(conn_def['type'])
        if not cls:
            raise ValueError(f"Unknown plugin: {conn_def['type']}")
        fs = cls()
        fs.connect(conn_def['config'])
        fs_pool[connection_id] = fs
        return fs, conn_def

def invalidate_fs(connection_id):
    with fs_pool_lock:
        fs_pool.pop(connection_id, None)

# ─── Cache ───────────────────────────────────────────────────────────────────
dir_cache = {}
dir_cache_lock = threading.Lock()

def cache_get(key):
    with dir_cache_lock:
        entry = dir_cache.get(key)
        if entry and (time.time() - entry['ts']) < CACHE_TTL:
            return entry['data']
        return None

def cache_set(key, data):
    with dir_cache_lock:
        dir_cache[key] = {'data': data, 'ts': time.time()}

def cache_invalidate(connection_id, path):
    import posixpath, os.path as osp
    try:
        parent = posixpath.dirname(path) or osp.dirname(path)
    except Exception:
        parent = path
    with dir_cache_lock:
        for k in list(dir_cache.keys()):
            if k in (f"{connection_id}:{path}", f"{connection_id}:{parent}"):
                dir_cache.pop(k, None)

# ─── Transfer queue ──────────────────────────────────────────────────────────
transfer_queue = []
transfer_lock = threading.Lock()
transfer_thread = None
current_transfer = None
current_percent = 0
transfer_stop = False
transfer_pause = False


class TransferStopped(Exception):
    pass


def _trim_memory():
    """Force glibc to return unused memory to the OS after a transfer."""
    import gc
    gc.collect()
    try:
        ctypes.cdll.LoadLibrary("libc.so.6").malloc_trim(0)
    except Exception:
        pass  # non-Linux, ignore


def format_size(size):
    for unit in ['B', 'KB', 'MB', 'GB', 'TB']:
        if size < 1024:
            return f"{size:.1f} {unit}"
        size /= 1024
    return f"{size:.1f} PB"

def copy_between_fs(src_fs, src_path, dst_fs, dst_path, transfer_id):
    global current_percent, transfer_stop, transfer_pause
    total = src_fs.get_total_size(src_path)
    copied = [0]

    def open_dst(d_path):
        """Open the destination file for writing and return a handle."""
        parent = dst_fs.dirname(d_path)
        if parent:
            dst_fs.mkdir(parent)
        try:
            # SFTP: keep the handle open for the whole transfer
            return dst_fs._sftp.open(d_path, 'wb')
        except AttributeError:
            # Local
            return open(d_path, 'wb')

    def stream_file(s_path, d_path):
        """Copy chunk by chunk without ever accumulating data in memory."""
        # `global` must be redeclared here: the outer declaration does not
        # extend to nested functions, and this function assigns the name
        global current_percent
        handle = open_dst(d_path)
        try:
            for chunk in src_fs.read_chunks(s_path, chunk_size=1024*1024):
                while transfer_pause and not transfer_stop:
                    eventlet.sleep(0.2)
                if transfer_stop:
                    raise TransferStopped()
                handle.write(chunk)  # direct write, no buffering
                copied[0] += len(chunk)
                pct = int(copied[0] / total * 100) if total > 0 else 100
                current_percent = pct
                eventlet.sleep(0)  # yield to eventlet
                socketio.emit('transfer_progress', {
                    'id': transfer_id, 'percent': pct,
                    'copied': copied[0], 'total': total,
                    'copied_fmt': format_size(copied[0]),
                    'total_fmt': format_size(total)
                })
        finally:
            handle.close()

    try:
        if src_fs.isdir(src_path):
            name = src_fs.basename(src_path)
            dst_base = dst_fs.join(dst_path, name)
            dst_fs.mkdir(dst_base)
            for root, dirs, files in src_fs.walk(src_path):
                rel = src_fs.relpath(root, src_path)
                dst_root = dst_fs.join(dst_base, rel) if rel and rel != '.' else dst_base
                dst_fs.mkdir(dst_root)
                for fname in files:
                    stream_file(src_fs.join(root, fname), dst_fs.join(dst_root, fname))
        else:
            stream_file(src_path, dst_fs.join(dst_path, src_fs.basename(src_path)))

        socketio.emit('transfer_done', {'id': transfer_id, 'success': True})

    except TransferStopped:
        try:
            p = dst_fs.join(dst_path, src_fs.basename(src_path))
            if dst_fs.exists(p):
                dst_fs.remove(p)
        except Exception:
            pass
        socketio.emit('transfer_done', {
            'id': transfer_id, 'success': False,
            'error': "Stopped by the user"
        })
    except Exception as e:
        socketio.emit('transfer_done', {'id': transfer_id, 'success': False, 'error': str(e)})

def queue_worker():
    global transfer_queue, current_transfer, transfer_thread
    global transfer_stop, transfer_pause, current_percent
    while True:
        with transfer_lock:
            if not transfer_queue:
                current_transfer = None
                current_percent = 0
                transfer_thread = None
                return
            transfer = transfer_queue.pop(0)
            current_transfer = transfer
            transfer_stop = False
            transfer_pause = False
        socketio.emit('transfer_started', {'id': transfer['id'], 'name': transfer['name']})
        try:
            src_fs, _ = get_fs(transfer['src_connection'])
            dst_fs, _ = get_fs(transfer['dst_connection'])
            src_lock = _get_conn_lock(transfer['src_connection'])
            dst_lock = _get_conn_lock(transfer['dst_connection'])
            # Acquire both locks in a fixed order to avoid deadlocks
            locks = sorted([
                (transfer['src_connection'], src_lock),
                (transfer['dst_connection'], dst_lock)
            ], key=lambda x: x[0])
            t_start = time.time()
            with locks[0][1]:
                with locks[1][1]:
                    copy_between_fs(src_fs, transfer['src'], dst_fs, transfer['dst'], transfer['id'])
            t_end = time.time()
            cache_invalidate(transfer['dst_connection'], transfer['dst'])
            # History: record the real size when possible
            try:
                fsize = src_fs.get_total_size(transfer['src'])
            except Exception:
                fsize = 0
            _add_history({
                'name': transfer['name'],
                'src': transfer['src'],
                'dst': transfer['dst'],
                'src_connection': transfer['src_connection'],
                'dst_connection': transfer['dst_connection'],
                'size': fsize,
                'duration': round(t_end - t_start, 1),
                'date': time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(t_end)),
                'success': True
            })
        except Exception as e:
            _add_history({
                'name': transfer.get('name', ''),
                'src': transfer.get('src', ''),
                'dst': transfer.get('dst', ''),
                'src_connection': transfer.get('src_connection', ''),
                'dst_connection': transfer.get('dst_connection', ''),
                'size': 0,
                'duration': 0,
                'date': time.strftime('%Y-%m-%d %H:%M:%S'),
                'success': False,
                'error': str(e)
            })
            socketio.emit('transfer_done', {'id': transfer['id'], 'success': False, 'error': str(e)})
        with transfer_lock:
            current_transfer = None
            current_percent = 0
        _trim_memory()
        time.sleep(0.1)

# ─── Auth ────────────────────────────────────────────────────────────────────
def check_password(password):
    return hmac.compare_digest(password, APP_PASSWORD)

def login_required(f):
    @wraps(f)
    def decorated(*args, **kwargs):
        if not session.get('logged_in'):
            if request.is_json or request.path.startswith('/api/'):
                return jsonify({'error': 'Not authenticated'}), 401
            return redirect(url_for('login', next=request.path))
        return f(*args, **kwargs)
    return decorated

# ─── Auth routes ─────────────────────────────────────────────────────────────
@app.route('/login', methods=['GET', 'POST'])
def login():
    error = None
    if request.method == 'POST':
        username = request.form.get('username', '').strip()
        password = request.form.get('password', '')
        if username == APP_USERNAME and check_password(password):
            session.permanent = True
            session['logged_in'] = True
            next_url = request.args.get('next', '/')
            if not next_url.startswith('/'):
                next_url = '/'
            return redirect(next_url)
        error = 'Invalid credentials'
    return render_template('login.html', title=APP_TITLE, error=error)

@app.route('/logout')
def logout():
    session.clear()
    return redirect(url_for('login'))

@app.route('/')
@login_required
def index():
    return render_template('index.html',
                           title=APP_TITLE,
                           plugins=plugin_registry.list_plugins())

# ─── Connections API ─────────────────────────────────────────────────────────
@app.route('/api/connections')
@login_required
def list_connections():
    conns = _load_connections()
    safe = []
    for c in conns.values():
        if not isinstance(c, dict) or 'type' not in c:
            continue  # skip internal keys such as __defaults__
        sc = {k: v for k, v in c.items() if k != 'config'}
        cfg_safe = {}
        for k, v in c.get('config', {}).items():
            cfg_safe[k] = '***' if any(x in k.lower() for x in ['password', 'key']) else v
        sc['config'] = cfg_safe
        cls = plugin_registry.get_plugin(c['type'])
        sc['type_label'] = cls.PLUGIN_LABEL if cls else c['type']
        safe.append(sc)
    return jsonify(safe)

@app.route('/api/connections', methods=['POST'])
@login_required
def add_connection():
    data = request.json
    conn_type = data.get('type')
    name = data.get('name', '').strip()
    config = data.get('config', {})
    if not conn_type or not name:
        return jsonify({'error': 'type and name required'}), 400
    cls = plugin_registry.get_plugin(conn_type)
    if not cls:
        return jsonify({'error': f'Unknown plugin: {conn_type}'}), 400
    try:
        fs = cls()
        fs.connect(config)
        fs.list(config.get('root_path', '/'))
        fs.disconnect()
    except Exception as e:
        return jsonify({'error': f'Connection failed: {e}'}), 400
    conn_id = str(uuid.uuid4())[:8]
    conn = {'id': conn_id, 'name': name, 'type': conn_type,
            'readonly': data.get('readonly', False), 'config': config}
    with connections_lock:
        conns = _load_connections()
        conns[conn_id] = conn
        _save_connections(conns)
    return jsonify({'success': True, 'id': conn_id})

@app.route('/api/connections/<conn_id>', methods=['PUT'])
@login_required
def update_connection(conn_id):
    data = request.json
    name = data.get('name', '').strip()
    config = data.get('config', {})
    if not name:
        return jsonify({'error': 'name required'}), 400
    with connections_lock:
        conns = _load_connections()
        if conn_id not in conns:
            return jsonify({'error': 'Connection not found'}), 404
        conn = conns[conn_id]
        conn_type = conn['type']
        # Merge the config: keep the masked (***) values from the old config
        merged_config = dict(conn.get('config', {}))
        for k, v in config.items():
            if v != '***':
                merged_config[k] = v
        # Test the connection with the new config
        cls = plugin_registry.get_plugin(conn_type)
        try:
            fs = cls()
            fs.connect(merged_config)
            fs.list(merged_config.get('root_path', '/'))
            fs.disconnect()
        except Exception as e:
            return jsonify({'error': f'Connection failed: {e}'}), 400
        conn['name'] = name
        conn['config'] = merged_config
        conn['readonly'] = data.get('readonly', conn.get('readonly', False))
        conns[conn_id] = conn
        _save_connections(conns)
    invalidate_fs(conn_id)
    return jsonify({'success': True})


@app.route('/api/connections/<conn_id>', methods=['DELETE'])
@login_required
def delete_connection(conn_id):
    if conn_id in ('seedbox', 'nas'):
        return jsonify({'error': 'Cannot delete the default connections'}), 400
    with connections_lock:
        conns = _load_connections()
        conns.pop(conn_id, None)
        _save_connections(conns)
    invalidate_fs(conn_id)
    return jsonify({'success': True})

@app.route('/api/connections/<conn_id>/test', methods=['POST'])
@login_required
def test_connection(conn_id):
    try:
        fs, conn_def = get_fs(conn_id)
        fs.list(conn_def['config'].get('root_path', '/'))
        return jsonify({'success': True})
    except Exception as e:
        return jsonify({'success': False, 'error': str(e)})

# ─── Files API ───────────────────────────────────────────────────────────────
@app.route('/api/list')
@login_required
def list_dir():
    connection_id = request.args.get('connection', 'seedbox')
    path = request.args.get('path', '')
    force = request.args.get('force', 'false').lower() == 'true'
    cache_key = f"{connection_id}:{path}"
    if not force:
        cached = cache_get(cache_key)
        if cached is not None:
            return jsonify(cached)
    try:
        fs, conn_def = get_fs(connection_id)
        root = conn_def['config'].get('root_path', '/')
        if not path:
            path = root
        conn_lock = _get_conn_lock(connection_id)
        with conn_lock:
            items = fs.list(path)
        result = {'items': items, 'path': path, 'readonly': conn_def.get('readonly', False)}
        cache_set(cache_key, result)
        # Prefetch only for local FS (SFTP is not thread-safe on a single connection)
        if conn_def.get('type') == 'local':
            subdirs = [i['path'] for i in items if i['is_dir']]
            if subdirs:
                def prefetch():
                    for sd in subdirs[:5]:
                        k = f"{connection_id}:{sd}"
                        if cache_get(k) is None:
                            try:
                                i2 = fs.list(sd)
                                cache_set(k, {'items': i2, 'path': sd,
                                              'readonly': conn_def.get('readonly', False)})
                            except Exception:
                                pass
                        eventlet.sleep(0)
                eventlet.spawn(prefetch)
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e), 'items': []})

@app.route('/api/mkdir', methods=['POST'])
@login_required
def mkdir():
    data = request.json
    connection_id = data.get('connection', 'nas')
    path = data.get('path', '')
    name = data.get('name', '').strip()
    if not name or '/' in name or '..' in name:
        return jsonify({'error': 'Invalid name'}), 400
    try:
        fs, conn_def = get_fs(connection_id)
        if conn_def.get('readonly'):
            return jsonify({'error': 'Read-only connection'}), 403
        fs.mkdir(fs.join(path, name))
        cache_invalidate(connection_id, path)
        return jsonify({'success': True})
    except Exception as e:
        return jsonify({'error': str(e)}), 500

@app.route('/api/rename', methods=['POST'])
@login_required
def rename():
    data = request.json
    connection_id = data.get('connection', 'nas')
    old_path = data.get('old_path', '')
    new_name = data.get('new_name', '').strip()
    if not new_name or '/' in new_name or '..' in new_name:
        return jsonify({'error': 'Invalid name'}), 400
    try:
        fs, conn_def = get_fs(connection_id)
        if conn_def.get('readonly'):
            return jsonify({'error': 'Read-only connection'}), 403
        parent = fs.dirname(old_path)
        new_path = fs.join(parent, new_name)
        fs.rename(old_path, new_path)
        cache_invalidate(connection_id, parent)
        return jsonify({'success': True})
    except Exception as e:
        return jsonify({'error': str(e)}), 500

# ─── Queue API ───────────────────────────────────────────────────────────────
@app.route('/api/queue/add', methods=['POST'])
@login_required
def add_to_queue():
    global transfer_thread
    data = request.json
    src = data.get('src')
    dst = data.get('dst')
    src_connection = data.get('src_connection', 'seedbox')
    dst_connection = data.get('dst_connection', 'nas')
    name = data.get('name', '')
    if not src or not dst:
        return jsonify({'error': 'src and dst required'}), 400
    conns = _load_connections()
    dst_conn = conns.get(dst_connection, {})
    if dst_conn.get('readonly'):
        return jsonify({'error': 'Destination is read-only'}), 403
    item_name = name or src.split('/')[-1]
    force = data.get('force', False)

    # Check whether the file/folder already exists at the destination
    if not force:
        try:
            dst_fs, _ = get_fs(dst_connection)
            dst_full = dst_fs.join(dst, item_name)
            if dst_fs.exists(dst_full):
                try:
                    existing_size = dst_fs.get_total_size(dst_full)
                except Exception:
                    existing_size = 0
                return jsonify({
                    'exists': True,
                    'name': item_name,
                    'dst_path': dst_full,
                    'existing_size': existing_size
                })
        except Exception:
            pass  # if the check itself fails, let the transfer proceed

    transfer_id = f"t_{int(time.time() * 1000)}"
    transfer = {
        'id': transfer_id, 'src': src, 'dst': dst,
        'src_connection': src_connection, 'dst_connection': dst_connection,
        'name': item_name
    }
    with transfer_lock:
        transfer_queue.append(transfer)
        queue_snapshot = [{'id': t['id'], 'name': t['name']} for t in transfer_queue]
    socketio.emit('queue_updated', {'queue': queue_snapshot})
    if transfer_thread is None or not transfer_thread.is_alive():
        transfer_thread = threading.Thread(target=queue_worker, daemon=True)
        transfer_thread.start()
    return jsonify({'success': True, 'id': transfer_id})

@app.route('/api/queue/remove', methods=['POST'])
@login_required
def remove_from_queue():
    data = request.json
    transfer_id = data.get('id')
    with transfer_lock:
        transfer_queue[:] = [t for t in transfer_queue if t['id'] != transfer_id]
        queue_snapshot = [{'id': t['id'], 'name': t['name']} for t in transfer_queue]
    socketio.emit('queue_updated', {'queue': queue_snapshot})
    return jsonify({'success': True})

@app.route('/api/queue')
@login_required
def get_queue():
    with transfer_lock:
        queue_snapshot = [{'id': t['id'], 'name': t['name']} for t in transfer_queue]
        cur = {
            'id': current_transfer['id'], 'name': current_transfer['name'],
            'percent': current_percent
        } if current_transfer else None
    return jsonify({'queue': queue_snapshot, 'current': cur, 'paused': transfer_pause})

@app.route('/api/transfer/stop', methods=['POST'])
@login_required
def transfer_stop_route():
    global transfer_stop, transfer_pause
    transfer_stop = True
    transfer_pause = False
    return jsonify({'success': True})

@app.route('/api/transfer/pause', methods=['POST'])
@login_required
def transfer_pause_route():
    global transfer_pause
    transfer_pause = not transfer_pause
    socketio.emit('transfer_paused', {'paused': transfer_pause})
    return jsonify({'success': True, 'paused': transfer_pause})

@app.route('/api/panels/default', methods=['GET'])
@login_required
def get_default_panels():
    conns = _load_connections()
    defaults = conns.get('__defaults__', {
        'left': {'connection': 'seedbox', 'path': ''},
        'right': {'connection': 'nas', 'path': ''}
    })
    return jsonify(defaults)

@app.route('/api/panels/default', methods=['POST'])
@login_required
def save_default_panels():
    data = request.json
    with connections_lock:
        conns = _load_connections()
        conns['__defaults__'] = {
            'left': {'connection': data.get('left_connection', 'seedbox'),
                     'path': data.get('left_path', '')},
            'right': {'connection': data.get('right_connection', 'nas'),
                      'path': data.get('right_path', '')}
        }
        _save_connections(conns)
    return jsonify({'success': True})

@app.route('/api/history')
@login_required
def get_history():
    return jsonify(_load_history())

@app.route('/api/history/clear', methods=['POST'])
@login_required
def clear_history():
    with history_lock:
        _save_history([])
    return jsonify({'success': True})

@app.route('/api/plugins')
@login_required
def get_plugins():
    return jsonify(plugin_registry.list_plugins())

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=5000, debug=False, allow_unsafe_werkzeug=True)
data/connections.json (new file, 43 lines)
@@ -0,0 +1,43 @@
{
  "seedbox": {
    "id": "seedbox",
    "name": "Seedbox",
    "type": "local",
    "readonly": true,
    "config": {
      "root_path": "/mnt/seedbox"
    }
  },
  "nas": {
    "id": "nas",
    "name": "NAS",
    "type": "local",
    "readonly": false,
    "config": {
      "root_path": "/mnt/nas"
    }
  },
  "fdcdfbe5": {
    "id": "fdcdfbe5",
    "name": "Useed SFTP",
    "type": "sftp",
    "readonly": true,
    "config": {
      "host": "geco.useed.me",
      "port": 13616,
      "username": "SB2269",
      "password": "rimyW9VSmlkqsd",
      "root_path": "/SB2269/torrent"
    }
  },
  "__defaults__": {
    "left": {
      "connection": "fdcdfbe5",
      "path": ""
    },
    "right": {
      "connection": "nas",
      "path": "/mnt/nas"
    }
  }
}
data/history.json (new file, 178 lines)
@@ -0,0 +1,178 @@
|
||||
[
  {
    "name": "Traques.S01E02.FRENCH.AD.1080p.WEBrip.EAC3.5.1.x265-TyHD.mkv",
    "src": "/SB2269/torrent/Series/Traques.S01E02.FRENCH.AD.1080p.WEBrip.EAC3.5.1.x265-TyHD.mkv",
    "dst": "/mnt/nas/Media/Video/Series/Traques/Traques - S1",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 898963075,
    "duration": 51.5,
    "date": "2026-03-11 06:00:15",
    "success": true
  },
  {
    "name": "Traques.S01E01.FRENCH.AD.1080p.WEBrip.EAC3.5.1.x265-TyHD.mkv",
    "src": "/SB2269/torrent/Series/Traques.S01E01.FRENCH.AD.1080p.WEBrip.EAC3.5.1.x265-TyHD.mkv",
    "dst": "/mnt/nas/Media/Video/Series/Traques/Traques - S1",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 890462705,
    "duration": 48.3,
    "date": "2026-03-11 05:59:23",
    "success": true
  },
  {
    "name": "Stranger.Things.S05E08.MULTI.1080p.WEBrip.x265-TyHD.mkv",
    "src": "/SB2269/torrent/Series/Stranger.Things.S05E08.MULTI.1080p.WEBrip.x265-TyHD.mkv",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 2624750852,
    "duration": 136.6,
    "date": "2026-03-10 21:54:22",
    "success": true
  },
  {
    "name": "Arco.2025.MULTi.TRUEFRENCH.2160p.HDR.DV.WEB-DL.H265-Slay3R.mkv",
    "src": "/SB2269/torrent/Films/Arco.2025.MULTi.TRUEFRENCH.2160p.HDR.DV.WEB-DL.H265-Slay3R.mkv",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 16755361521,
    "duration": 862.2,
    "date": "2026-03-10 15:55:32",
    "success": true
  },
  {
    "name": "Roofman.2025.MULTi.TRUEFRENCH.1080p.WEB.H265-SUPPLY",
    "src": "/SB2269/torrent/Films/Roofman.2025.MULTi.TRUEFRENCH.1080p.WEB.H265-SUPPLY",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 3589620970,
    "duration": 198.7,
    "date": "2026-03-10 15:37:20",
    "success": true
  },
  {
    "name": "Deadpool and Wolverine (2024) Hybrid MULTi VFF 2160p 10bit 4KLight DV HDR10Plus BluRay DDP 7.1 x265-QTZ.mkv",
    "src": "/SB2269/torrent/Films/Deadpool and Wolverine (2024) Hybrid MULTi VFF 2160p 10bit 4KLight DV HDR10Plus BluRay DDP 7.1 x265-QTZ.mkv",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 7585847557,
    "duration": 417.4,
    "date": "2026-03-10 15:24:36",
    "success": true
  },
  {
    "name": "Hellboy II (2008) Les L\u00e9gions d'Or Maudites [2160p HDR10 x265 MULTI VFF 5.1 DTS VO 7.1 HDMA X].mkv",
    "src": "/SB2269/torrent/Films/Hellboy II (2008) Les L\u00e9gions d'Or Maudites [2160p HDR10 x265 MULTI VFF 5.1 DTS VO 7.1 HDMA X].mkv",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 14091470311,
    "duration": 753.2,
    "date": "2026-03-10 15:11:56",
    "success": true
  },
  {
    "name": "Fantastic Four 2005 MULTi VFF 1080p BluRay E-AC3 x265-Winks.mkv",
    "src": "/SB2269/torrent/Films/Fantastic Four 2005 MULTi VFF 1080p BluRay E-AC3 x265-Winks.mkv",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 5097511665,
    "duration": 290.8,
    "date": "2026-03-10 14:54:27",
    "success": true
  },
  {
    "name": "Avatar 2009 MULTi VFF 1080p DNSP WEB x265 HDR10+ DDP 5.1-Decha",
    "src": "/SB2269/torrent/Films/Avatar 2009 MULTi VFF 1080p DNSP WEB x265 HDR10+ DDP 5.1-Decha",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 7021710760,
    "duration": 373.9,
    "date": "2026-03-10 14:47:16",
    "success": true
  },
  {
    "name": "Asterix & Obelix Mission Cleopatre (2002) VOF 2160p 10bit 4KLight DV HDR BluRay DDP 7.1 x265-QTZ.mkv",
    "src": "/SB2269/torrent/Films/Asterix & Obelix Mission Cleopatre (2002) VOF 2160p 10bit 4KLight DV HDR BluRay DDP 7.1 x265-QTZ.mkv",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 5559752956,
    "duration": 297.9,
    "date": "2026-03-10 14:38:20",
    "success": true
  },
  {
    "name": "13.jours.13.nuits.2025.FRENCH.AD.1080p.WEB.H265-TyHD.mkv",
    "src": "/SB2269/torrent/Films/13.jours.13.nuits.2025.FRENCH.AD.1080p.WEB.H265-TyHD.mkv",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 2094587907,
    "duration": 113.5,
    "date": "2026-03-10 14:33:22",
    "success": true
  },
  {
    "name": "Igorrr - Amen (2025)",
    "src": "/SB2269/torrent/Music/Igorrr - Amen (2025)",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 108724752,
    "duration": 11.8,
    "date": "2026-03-10 13:58:06",
    "success": true
  },
  {
    "name": "Asterix & Obelix Mission Cleopatre (2002) VOF 2160p 10bit 4KLight DV HDR BluRay DDP 7.1 x265-QTZ.mkv",
    "src": "/SB2269/torrent/Films/Asterix & Obelix Mission Cleopatre (2002) VOF 2160p 10bit 4KLight DV HDR BluRay DDP 7.1 x265-QTZ.mkv",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 0,
    "duration": 96.4,
    "date": "2026-03-10 13:53:48",
    "success": true
  },
  {
    "name": "13.jours.13.nuits.2025.FRENCH.AD.1080p.WEB.H265-TyHD.mkv",
    "src": "/SB2269/torrent/Films/13.jours.13.nuits.2025.FRENCH.AD.1080p.WEB.H265-TyHD.mkv",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 2094587907,
    "duration": 153.8,
    "date": "2026-03-10 12:41:46",
    "success": true
  },
  {
    "name": "13.jours.13.nuits.2025.FRENCH.AD.1080p.WEB.H265-TyHD.mkv",
    "src": "/SB2269/torrent/Films/13.jours.13.nuits.2025.FRENCH.AD.1080p.WEB.H265-TyHD.mkv",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 2094587907,
    "duration": 143.4,
    "date": "2026-03-10 12:34:56",
    "success": true
  },
  {
    "name": "Igorrr - Amen (2025)",
    "src": "/SB2269/torrent/Music/Igorrr - Amen (2025)",
    "dst": "/mnt/nas/test",
    "src_connection": "fdcdfbe5",
    "dst_connection": "nas",
    "size": 108724752,
    "duration": 11.2,
    "date": "2026-03-10 12:32:20",
    "success": true
  }
]
22
docker-compose.yml
Normal file
@@ -0,0 +1,22 @@
services:
  seedmover:
    build: .
    container_name: seedmover-2
    ports:
      - 5019:5000
    environment:
      - SEEDBOX_PATH=/mnt/seedbox
      - NAS_PATH=/mnt/nas
      - APP_TITLE=SeedMover
      - SECRET_KEY=049b72cbe3e3f6dbbb242b30a3087dda9290ddaf1ab10a53ae160d09146b33e5
      - APP_USERNAME=admin
      - APP_PASSWORD=changeme
      - SESSION_LIFETIME=86400
      - CACHE_TTL=60
      - DATA_DIR=/app/data
    volumes:
      - /mnt/nas/Useed/Media_useed:/mnt/seedbox:ro
      - /mnt/nas/BOB4-Syno:/mnt/nas
      - ./data:/app/data  # connection persistence
    restart: unless-stopped
networks: {}
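The environment block above carries the settings the changelog calls configurable (credentials, `SECRET_KEY`, session lifetime, cache TTL). A hypothetical sketch of how the Flask app (`app.py` is not part of this diff) might read them — every variable name beyond the compose keys is an assumption:

```python
import os
from datetime import timedelta

# Hypothetical mapping of the compose env vars onto app settings.
# Defaults mirror the values in docker-compose.yml.
SECRET_KEY = os.environ.get('SECRET_KEY', 'dev-only-change-me')
APP_USERNAME = os.environ.get('APP_USERNAME', 'admin')
SESSION_LIFETIME = int(os.environ.get('SESSION_LIFETIME', 86400))  # seconds
CACHE_TTL = int(os.environ.get('CACHE_TTL', 60))                   # seconds

# Flask expects a timedelta for PERMANENT_SESSION_LIFETIME.
session_lifetime = timedelta(seconds=SESSION_LIFETIME)
print(session_lifetime)
```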
59
plugins/__init__.py
Normal file
@@ -0,0 +1,59 @@
"""
SeedMover plugin registry.

Any plugin placed in this folder that inherits from AbstractFS
is automatically discovered and registered.
"""
import importlib
import os
from .base import AbstractFS

_registry: dict[str, type] = {}


def _discover():
    """Scan the plugins/ folder and import each module."""
    plugins_dir = os.path.dirname(__file__)
    for fname in os.listdir(plugins_dir):
        if fname.startswith('_') or not fname.endswith('.py'):
            continue
        module_name = fname[:-3]
        if module_name in ('base',):
            continue
        try:
            mod = importlib.import_module(f'.{module_name}', package='plugins')
            for attr in dir(mod):
                cls = getattr(mod, attr)
                if (
                    isinstance(cls, type)
                    and issubclass(cls, AbstractFS)
                    and cls is not AbstractFS
                    and cls.PLUGIN_NAME
                ):
                    _registry[cls.PLUGIN_NAME] = cls
        except Exception as e:
            print(f"[plugins] Failed to load {fname}: {e}")


def get_plugin(name: str) -> type:
    """Return the plugin class for a given name."""
    if not _registry:
        _discover()
    return _registry.get(name)


def list_plugins() -> list:
    """Return the available plugins with their config fields."""
    if not _registry:
        _discover()
    return [
        {
            'name': cls.PLUGIN_NAME,
            'label': cls.PLUGIN_LABEL,
            'fields': cls.get_config_fields()
        }
        for cls in _registry.values()
    ]


# Discover at module load time
_discover()
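The discovery filter above accepts an object only if it is a class, subclasses AbstractFS, is not the base itself, and sets a non-empty PLUGIN_NAME. A minimal sketch of that same filter over an in-memory namespace instead of files on disk — all class names here are illustrative:

```python
# Stand-ins for plugins/base.py and a plugin module's namespace.
class AbstractFS:
    PLUGIN_NAME = None
    PLUGIN_LABEL = None

class DummyFS(AbstractFS):
    PLUGIN_NAME = "dummy"
    PLUGIN_LABEL = "Dummy"

class NotAPlugin:          # no AbstractFS ancestry: must be skipped
    PLUGIN_NAME = "ignored"

namespace = {"AbstractFS": AbstractFS, "DummyFS": DummyFS,
             "NotAPlugin": NotAPlugin, "helper": lambda: None}

registry = {}
for obj in namespace.values():
    if (
        isinstance(obj, type)            # skip functions and values
        and issubclass(obj, AbstractFS)  # must implement the interface
        and obj is not AbstractFS        # skip the abstract base itself
        and obj.PLUGIN_NAME              # base left PLUGIN_NAME = None
    ):
        registry[obj.PLUGIN_NAME] = obj

print(sorted(registry))  # only DummyFS survives the filter
```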
140
plugins/base.py
Normal file
@@ -0,0 +1,140 @@
from abc import ABC, abstractmethod


class AbstractFS(ABC):
    """
    Abstract interface that every filesystem plugin implements.

    To create a new plugin:
    1. Create plugins/myplugin.py
    2. Inherit from AbstractFS
    3. Implement every abstract method
    4. Define PLUGIN_NAME and PLUGIN_LABEL

    The plugin is discovered automatically at startup.
    """

    PLUGIN_NAME = None   # internal identifier, e.g. "sftp"
    PLUGIN_LABEL = None  # displayed label, e.g. "SFTP"

    # ─── Lifecycle ───────────────────────────────────────────────

    @abstractmethod
    def connect(self, config: dict):
        """Open the connection using the supplied config."""
        pass

    @abstractmethod
    def disconnect(self):
        """Close the connection cleanly."""
        pass

    @abstractmethod
    def is_connected(self) -> bool:
        """Return True if the connection is alive."""
        pass

    # ─── Navigation ──────────────────────────────────────────────

    @abstractmethod
    def list(self, path: str) -> list:
        """
        List the contents of a directory.
        Returns a list of dicts:
            { name, path, is_dir, size, mtime }
        sorted directories first, then files, alphabetically.
        """
        pass

    @abstractmethod
    def isdir(self, path: str) -> bool:
        pass

    @abstractmethod
    def exists(self, path: str) -> bool:
        pass

    @abstractmethod
    def getsize(self, path: str) -> int:
        pass

    @abstractmethod
    def join(self, *parts) -> str:
        """os.path.join equivalent for this FS."""
        pass

    @abstractmethod
    def basename(self, path: str) -> str:
        pass

    @abstractmethod
    def dirname(self, path: str) -> str:
        pass

    @abstractmethod
    def relpath(self, path: str, base: str) -> str:
        pass

    # ─── Operations ──────────────────────────────────────────────

    @abstractmethod
    def mkdir(self, path: str):
        """Create a directory (and any missing parents)."""
        pass

    @abstractmethod
    def rename(self, old_path: str, new_path: str):
        pass

    @abstractmethod
    def remove(self, path: str):
        pass

    @abstractmethod
    def walk(self, path: str):
        """
        Generator identical to os.walk:
            yield (root, dirs, files)
        """
        pass

    # ─── Transfer ────────────────────────────────────────────────

    @abstractmethod
    def read_chunks(self, path: str, chunk_size: int = 4 * 1024 * 1024):
        """
        Generator yielding bytes chunk by chunk.
        Used by the copy engine.
        """
        pass

    @abstractmethod
    def write_chunks(self, path: str, chunks):
        """
        Write a file from a generator of byte chunks.
        Used by the copy engine.
        """
        pass

    def get_total_size(self, path: str) -> int:
        """Total size of a file, or of a directory tree recursively."""
        if not self.isdir(path):
            return self.getsize(path)
        total = 0
        for root, dirs, files in self.walk(path):
            for f in files:
                try:
                    total += self.getsize(self.join(root, f))
                except Exception:
                    pass
        return total

    # ─── Plugin metadata ─────────────────────────────────────────

    @classmethod
    def get_config_fields(cls) -> list:
        """
        Return the list of config fields this plugin needs.
        Each field: { name, label, type, required, default }
        type: "text" | "password" | "number" | "file"
        Override in each plugin.
        """
        return []
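The read_chunks/write_chunks pair is the whole transfer contract: the destination consumes the source's generator, so at most one chunk is in flight at a time regardless of file size. A local-only sketch of that pipeline, with free functions standing in for the plugin methods (names are illustrative):

```python
import os
import tempfile

def read_chunks(path, chunk_size=4 * 1024 * 1024):
    # Source side: stream the file as byte chunks (mirrors LocalFS.read_chunks).
    with open(path, 'rb') as f:
        while True:
            buf = f.read(chunk_size)
            if not buf:
                break
            yield buf

def write_chunks(path, chunks):
    # Destination side: consume the generator lazily, so only one
    # chunk is held in memory at a time (mirrors LocalFS.write_chunks).
    with open(path, 'wb') as f:
        for chunk in chunks:
            f.write(chunk)

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, 'src.bin')
    dst = os.path.join(tmp, 'dst.bin')
    with open(src, 'wb') as f:
        f.write(os.urandom(3 * 1024 * 1024))  # 3 MiB test payload
    # The copy engine is just: pipe the source generator into the sink.
    write_chunks(dst, read_chunks(src, chunk_size=1024 * 1024))
    with open(src, 'rb') as a, open(dst, 'rb') as b:
        assert a.read() == b.read()
```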
111
plugins/local.py
Normal file
@@ -0,0 +1,111 @@
import os
from .base import AbstractFS


class LocalFS(AbstractFS):

    PLUGIN_NAME = "local"
    PLUGIN_LABEL = "Local"

    def __init__(self):
        self._root = "/"
        self._connected = False

    def connect(self, config: dict):
        self._root = config.get("root_path", "/")
        self._connected = True

    def disconnect(self):
        self._connected = False

    def is_connected(self) -> bool:
        return self._connected

    # ─── Navigation ──────────────────────────────────────────────

    def list(self, path: str) -> list:
        entries = sorted(os.scandir(path), key=lambda e: (not e.is_dir(), e.name.lower()))
        items = []
        for entry in entries:
            try:
                stat = entry.stat()
                items.append({
                    'name': entry.name,
                    'path': entry.path,
                    'is_dir': entry.is_dir(),
                    'size': stat.st_size if not entry.is_dir() else 0,
                    'mtime': stat.st_mtime,
                })
            except PermissionError:
                continue
        return items

    def isdir(self, path: str) -> bool:
        return os.path.isdir(path)

    def exists(self, path: str) -> bool:
        return os.path.exists(path)

    def getsize(self, path: str) -> int:
        return os.path.getsize(path)

    def join(self, *parts) -> str:
        return os.path.join(*parts)

    def basename(self, path: str) -> str:
        return os.path.basename(path)

    def dirname(self, path: str) -> str:
        return os.path.dirname(path)

    def relpath(self, path: str, base: str) -> str:
        return os.path.relpath(path, base)

    # ─── Operations ──────────────────────────────────────────────

    def mkdir(self, path: str):
        os.makedirs(path, exist_ok=True)

    def rename(self, old_path: str, new_path: str):
        os.rename(old_path, new_path)

    def remove(self, path: str):
        if os.path.isdir(path):
            import shutil
            shutil.rmtree(path)
        else:
            os.remove(path)

    def walk(self, path: str):
        yield from os.walk(path)

    # ─── Transfer ────────────────────────────────────────────────

    def read_chunks(self, path: str, chunk_size: int = 4 * 1024 * 1024):
        with open(path, 'rb') as f:
            while True:
                buf = f.read(chunk_size)
                if not buf:
                    break
                yield buf

    def write_chunks(self, path: str, chunks):
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, 'wb') as f:
            for chunk in chunks:
                f.write(chunk)

    # ─── Config ──────────────────────────────────────────────────

    @classmethod
    def get_config_fields(cls) -> list:
        return [
            {
                'name': 'root_path',
                'label': 'Chemin racine',
                'type': 'text',
                'required': True,
                'default': '/',
                'placeholder': '/mnt/nas'
            }
        ]
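The sort key in LocalFS.list() is worth calling out: `(not e.is_dir(), e.name.lower())` puts directories first, then files, each group alphabetical and case-insensitive — the order the abstract interface requires. A standalone sketch of that ordering rule (the `listing` helper is illustrative):

```python
import os
import tempfile

def listing(path):
    # Same ordering rule as LocalFS.list(): directories first (False
    # sorts before True), then files, case-insensitive within each group.
    entries = sorted(os.scandir(path), key=lambda e: (not e.is_dir(), e.name.lower()))
    return [e.name + ('/' if e.is_dir() else '') for e in entries]

with tempfile.TemporaryDirectory() as tmp:
    os.mkdir(os.path.join(tmp, 'zeta'))               # a directory late in the alphabet
    open(os.path.join(tmp, 'Alpha.mkv'), 'w').close() # files early in the alphabet
    open(os.path.join(tmp, 'beta.mkv'), 'w').close()
    print(listing(tmp))  # ['zeta/', 'Alpha.mkv', 'beta.mkv']
```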
202
plugins/sftp.py
Normal file
@@ -0,0 +1,202 @@
import os
import stat
import posixpath
import paramiko
from .base import AbstractFS


class SFTPfs(AbstractFS):

    PLUGIN_NAME = "sftp"
    PLUGIN_LABEL = "SFTP"

    def __init__(self):
        self._client = None  # paramiko SSHClient
        self._sftp = None    # paramiko SFTPClient
        self._root = "/"
        self._connected = False

    def connect(self, config: dict):
        host = config['host']
        port = int(config.get('port', 22))
        username = config['username']
        password = config.get('password') or None
        key_path = config.get('key_path') or None
        self._root = config.get('root_path', '/')

        self._client = paramiko.SSHClient()
        self._client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

        connect_kwargs = dict(hostname=host, port=port, username=username, timeout=10)
        if key_path:
            connect_kwargs['key_filename'] = key_path
        if password:
            connect_kwargs['password'] = password

        self._client.connect(**connect_kwargs)
        self._sftp = self._client.open_sftp()
        self._connected = True

    def disconnect(self):
        try:
            if self._sftp:
                self._sftp.close()
            if self._client:
                self._client.close()
        except Exception:
            pass
        self._sftp = None
        self._client = None
        self._connected = False

    def is_connected(self) -> bool:
        if not self._connected or not self._sftp:
            return False
        try:
            self._sftp.stat('.')
            return True
        except Exception:
            self._connected = False
            return False

    def _reconnect_if_needed(self, config):
        if not self.is_connected():
            self.connect(config)

    # ─── Navigation ──────────────────────────────────────────────

    def list(self, path: str) -> list:
        items = []
        for attr in self._sftp.listdir_attr(path):
            is_dir = stat.S_ISDIR(attr.st_mode)
            items.append({
                'name': attr.filename,
                'path': posixpath.join(path, attr.filename),
                'is_dir': is_dir,
                'size': attr.st_size if not is_dir else 0,
                'mtime': attr.st_mtime,
            })
        items.sort(key=lambda e: (not e['is_dir'], e['name'].lower()))
        return items

    def isdir(self, path: str) -> bool:
        try:
            return stat.S_ISDIR(self._sftp.stat(path).st_mode)
        except Exception:
            return False

    def exists(self, path: str) -> bool:
        try:
            self._sftp.stat(path)
            return True
        except Exception:
            return False

    def getsize(self, path: str) -> int:
        return self._sftp.stat(path).st_size

    def join(self, *parts) -> str:
        return posixpath.join(*parts)

    def basename(self, path: str) -> str:
        return posixpath.basename(path)

    def dirname(self, path: str) -> str:
        return posixpath.dirname(path)

    def relpath(self, path: str, base: str) -> str:
        # Simple prefix-based relpath: paths outside base are returned unchanged
        if path.startswith(base):
            rel = path[len(base):]
            return rel.lstrip('/')
        return path

    # ─── Operations ──────────────────────────────────────────────

    def mkdir(self, path: str):
        """Create the directory and any missing parents."""
        parts = path.split('/')
        current = ''
        for part in parts:
            if not part:
                current = '/'
                continue
            current = posixpath.join(current, part)
            try:
                self._sftp.stat(current)
            except IOError:
                self._sftp.mkdir(current)

    def rename(self, old_path: str, new_path: str):
        self._sftp.rename(old_path, new_path)

    def remove(self, path: str):
        if self.isdir(path):
            for attr in self._sftp.listdir_attr(path):
                child = posixpath.join(path, attr.filename)
                if stat.S_ISDIR(attr.st_mode):
                    self.remove(child)
                else:
                    self._sftp.remove(child)
            self._sftp.rmdir(path)
        else:
            self._sftp.remove(path)

    def walk(self, path: str):
        """os.walk equivalent over SFTP."""
        dirs = []
        files = []
        for attr in self._sftp.listdir_attr(path):
            if stat.S_ISDIR(attr.st_mode):
                dirs.append(attr.filename)
            else:
                files.append(attr.filename)
        yield path, dirs, files
        for d in dirs:
            yield from self.walk(posixpath.join(path, d))

    # ─── Transfer ────────────────────────────────────────────────

    def read_chunks(self, path: str, chunk_size: int = 4 * 1024 * 1024):
        # Cap Paramiko's read pipelining to keep RAM usage bounded
        old_max = paramiko.sftp_file.SFTPFile.MAX_REQUEST_SIZE
        paramiko.sftp_file.SFTPFile.MAX_REQUEST_SIZE = 32768  # native 32 KB SFTP block
        import gc
        SFTP_BLOCK = 32768
        try:
            with self._sftp.open(path, 'rb') as f:
                accumulated = bytearray()
                while True:
                    block = f.read(SFTP_BLOCK)
                    if not block:
                        break
                    accumulated += block
                    if len(accumulated) >= chunk_size:
                        data = bytes(accumulated)
                        accumulated = bytearray()
                        gc.collect()  # nudge Python into returning memory
                        yield data
                if accumulated:
                    yield bytes(accumulated)
        finally:
            paramiko.sftp_file.SFTPFile.MAX_REQUEST_SIZE = old_max

    def write_chunks(self, path: str, chunks):
        self.mkdir(posixpath.dirname(path))
        with self._sftp.open(path, 'wb') as f:
            for chunk in chunks:
                f.write(chunk)

    # ─── Config ──────────────────────────────────────────────────

    @classmethod
    def get_config_fields(cls) -> list:
        return [
            {'name': 'host', 'label': 'Hôte', 'type': 'text', 'required': True, 'placeholder': 'sftp.exemple.com'},
            {'name': 'port', 'label': 'Port', 'type': 'number', 'required': False, 'default': 22},
            {'name': 'username', 'label': 'Utilisateur', 'type': 'text', 'required': True, 'placeholder': 'user'},
            {'name': 'password', 'label': 'Mot de passe', 'type': 'password', 'required': False, 'placeholder': 'Laisser vide si clé SSH'},
            {'name': 'key_path', 'label': 'Clé SSH (chemin)', 'type': 'text', 'required': False, 'placeholder': '/root/.ssh/id_rsa'},
            {'name': 'root_path', 'label': 'Dossier racine', 'type': 'text', 'required': False, 'default': '/', 'placeholder': '/home/user'},
        ]
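SFTPfs.read_chunks regroups native 32 KB SFTP blocks into larger chunks before yielding, so the copy engine still sees 4 MiB units while Paramiko's readahead stays small. The accumulation logic, isolated from Paramiko (the `accumulate` function name is illustrative):

```python
def accumulate(blocks, chunk_size):
    # Same buffering pattern as SFTPfs.read_chunks: small native SFTP
    # blocks are collected until at least chunk_size bytes are available,
    # then flushed as one chunk; a final partial chunk is flushed at EOF.
    buf = bytearray()
    for block in blocks:
        buf += block
        if len(buf) >= chunk_size:
            yield bytes(buf)
            buf = bytearray()
    if buf:
        yield bytes(buf)

blocks = [b'x' * 32768 for _ in range(10)]    # ten 32 KB reads
chunks = list(accumulate(blocks, 4 * 32768))  # regroup into 128 KB chunks
print([len(c) for c in chunks])               # [131072, 131072, 65536]
```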
4
requirements.txt
Normal file
@@ -0,0 +1,4 @@
flask==3.0.3
flask-socketio==5.3.6
eventlet==0.36.1
paramiko==3.4.0
1671
templates/index.html
Normal file
File diff suppressed because it is too large
175
templates/login.html
Normal file
@@ -0,0 +1,175 @@
<!DOCTYPE html>
<html lang="fr">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>{{ title }} — Connexion</title>
    <link rel="preconnect" href="https://fonts.googleapis.com">
    <link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;600;700&family=Syne:wght@400;700;800&display=swap" rel="stylesheet">
    <style>
        :root {
            --bg: #0d0f14;
            --surface: #161921;
            --surface2: #1e2330;
            --border: #2a3040;
            --accent: #00e5a0;
            --accent2: #0099ff;
            --danger: #ff4455;
            --text: #e2e8f0;
            --muted: #6b7a99;
        }

        * { box-sizing: border-box; margin: 0; padding: 0; }

        body {
            background: var(--bg);
            color: var(--text);
            font-family: 'JetBrains Mono', monospace;
            font-size: 13px;
            height: 100dvh;
            display: flex;
            align-items: center;
            justify-content: center;
        }

        /* Animated background */
        body::before {
            content: '';
            position: fixed;
            inset: 0;
            background:
                radial-gradient(ellipse 60% 40% at 20% 60%, rgba(0,229,160,0.05) 0%, transparent 70%),
                radial-gradient(ellipse 50% 40% at 80% 30%, rgba(0,153,255,0.05) 0%, transparent 70%);
            pointer-events: none;
        }

        .card {
            background: var(--surface);
            border: 1px solid var(--border);
            border-radius: 12px;
            padding: 40px 36px;
            width: 100%;
            max-width: 360px;
            position: relative;
            box-shadow: 0 24px 60px rgba(0,0,0,0.4);
        }

        .card::before {
            content: '';
            position: absolute;
            top: 0; left: 24px; right: 24px;
            height: 1px;
            background: linear-gradient(90deg, transparent, var(--accent), transparent);
            opacity: 0.4;
        }

        .logo {
            font-family: 'Syne', sans-serif;
            font-size: 24px;
            font-weight: 800;
            color: var(--accent);
            text-align: center;
            margin-bottom: 6px;
            letter-spacing: -0.5px;
        }

        .logo span { color: var(--text); }

        .subtitle {
            text-align: center;
            color: var(--muted);
            font-size: 11px;
            margin-bottom: 32px;
            letter-spacing: 1px;
            text-transform: uppercase;
        }

        .field {
            margin-bottom: 16px;
        }

        label {
            display: block;
            font-size: 10px;
            color: var(--muted);
            letter-spacing: 1px;
            text-transform: uppercase;
            margin-bottom: 6px;
        }

        input {
            width: 100%;
            background: var(--bg);
            border: 1px solid var(--border);
            color: var(--text);
            padding: 10px 12px;
            border-radius: 6px;
            font-family: 'JetBrains Mono', monospace;
            font-size: 13px;
            outline: none;
            transition: border-color 0.15s, box-shadow 0.15s;
        }

        input:focus {
            border-color: var(--accent);
            box-shadow: 0 0 0 3px rgba(0,229,160,0.1);
        }

        .error {
            background: rgba(255,68,85,0.1);
            border: 1px solid rgba(255,68,85,0.3);
            color: var(--danger);
            padding: 10px 12px;
            border-radius: 6px;
            font-size: 11px;
            margin-bottom: 16px;
            display: flex;
            align-items: center;
            gap: 8px;
        }

        .btn-submit {
            width: 100%;
            margin-top: 8px;
            padding: 11px;
            background: rgba(0,229,160,0.12);
            border: 1px solid var(--accent);
            color: var(--accent);
            border-radius: 6px;
            font-family: 'JetBrains Mono', monospace;
            font-size: 13px;
            font-weight: 600;
            cursor: pointer;
            transition: background 0.15s;
            letter-spacing: 0.5px;
        }

        .btn-submit:hover { background: rgba(0,229,160,0.22); }
        .btn-submit:active { background: rgba(0,229,160,0.3); }
    </style>
</head>
<body>

    <div class="card">
        <div class="logo">Seed<span>Mover</span></div>
        <div class="subtitle">Gestionnaire de transferts</div>

        {% if error %}
        <div class="error">⚠ {{ error }}</div>
        {% endif %}

        <form method="POST">
            <div class="field">
                <label for="username">Identifiant</label>
                <input type="text" id="username" name="username" autocomplete="username" autofocus />
            </div>
            <div class="field">
                <label for="password">Mot de passe</label>
                <input type="password" id="password" name="password" autocomplete="current-password" />
            </div>
            <button type="submit" class="btn-submit">Connexion →</button>
        </form>
    </div>

</body>
</html>