🚀 Initial commit: NeuraTerm v1.0.0

Professional AI terminal with multi-provider support and cost tracking

Main features:
- OpenAI (ChatGPT) and Mistral AI support
- Real-time token counting and cost calculation
- Detailed per-provider statistics
- Professional, enterprise-grade interface
- Modular TypeScript architecture

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Network Monitor Bot 2025-08-19 19:28:39 +02:00
commit 0b9bab45a8
23 changed files with 1470 additions and 0 deletions


@ -0,0 +1,10 @@
{
"permissions": {
"allow": [
"Bash(mkdir:*)",
"Bash(git add:*)"
],
"deny": [],
"ask": []
}
}

24
.eslintrc.json Normal file

@ -0,0 +1,24 @@
{
"env": {
"node": true,
"es2022": true
},
"extends": [
"eslint:recommended",
"@typescript-eslint/recommended"
],
"parser": "@typescript-eslint/parser",
"parserOptions": {
"ecmaVersion": "latest",
"sourceType": "module"
},
"plugins": [
"@typescript-eslint"
],
"rules": {
"semi": ["error", "always"],
"quotes": ["error", "single"],
"@typescript-eslint/no-explicit-any": "warn",
"@typescript-eslint/no-unused-vars": "error"
}
}

1
.gitignore vendored Normal file

@ -0,0 +1 @@
.divers

179
README.md Normal file

@ -0,0 +1,179 @@
# 🧠 NeuraTerm
Professional AI terminal with multi-provider support and advanced cost tracking.
## 🚀 Features
- **Multi-provider support**: OpenAI (ChatGPT) and Mistral AI
- **Real-time cost tracking**: token counting and precise cost calculation
- **Detailed statistics**: usage analysis per provider
- **Professional interface**: a terminal optimized for professional use
- **Flexible configuration**: environment variables and configuration files
## 📦 Installation
```bash
npm install -g neuraterm
```
Or clone the repo and build it:
```bash
git clone <repo-url>
cd NeuraTerm
npm install
npm run build
npm start
```
## ⚙️ Configuration
### Environment variables
```bash
export OPENAI_API_KEY="your_openai_key"
export MISTRAL_API_KEY="your_mistral_key"
```
### Configuration file
Create `~/.neuraterm/config.json`:
```json
{
"ai": {
"openai": {
"apiKey": "your_openai_key",
"model": "gpt-4o-mini"
},
"mistral": {
"apiKey": "your_mistral_key",
"model": "mistral-large-latest"
},
"defaultProvider": "openai"
},
"terminal": {
"theme": "dark",
"showTokenCount": true,
"showCost": true,
"autoSave": true
}
}
```
Environment variables take precedence over values from the configuration file.
## 🎯 Usage
### Basic commands
```bash
# Launch NeuraTerm
neuraterm
# Help
help
# Ask the AI a question
How can I optimize my Python code?
# Switch provider
provider mistral
# Show statistics
stats
stats openai
cost
# Configuration
config
providers
```
### Professional usage examples
```bash
# Code analysis
Can you analyze this Python file and suggest improvements?
# Test generation
Generate unit tests for this JavaScript function
# Optimization
How can I reduce the complexity of this algorithm?
# Documentation
Write technical documentation for this API
```
## 📊 Cost tracking
NeuraTerm automatically displays:
- Tokens used (input → output)
- Cost per request
- Cumulative total cost
- Average response time
- Per-provider statistics
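When cost display is enabled, each answer is followed by a short summary printed by `displayResponse` in `src/terminal/index.ts`. For illustration (the numbers below are made up):
```
────────────────────────────────────────
Tokens: 2000 (1200 → 800)
Cost: $0.0007 | Provider: openai (gpt-4o-mini)
```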
## 🔧 Development
```bash
# Install dependencies
npm install
# Development with live reload
npm run dev
# Build
npm run build
# Tests
npm test
# Linting
npm run lint
```
## 📝 Supported models
### OpenAI
- gpt-4o-mini (recommended)
- gpt-4o
- gpt-4-turbo
- gpt-4
- gpt-3.5-turbo
### Mistral AI
- mistral-large-latest (recommended)
- mistral-medium
- mistral-small
- codestral-latest
## 💰 Pricing (November 2024)
| Provider | Model | Input (per 1K tokens) | Output (per 1K tokens) |
|----------|-------|-----------------------|------------------------|
| OpenAI | gpt-4o-mini | $0.00015 | $0.0006 |
| OpenAI | gpt-4o | $0.005 | $0.015 |
| Mistral | mistral-large-latest | $0.004 | $0.012 |
| Mistral | mistral-small | $0.002 | $0.006 |
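For example, a gpt-4o-mini request that consumes 1,200 input tokens and produces 800 output tokens costs (1200 / 1000) × $0.00015 + (800 / 1000) × $0.0006 = $0.00018 + $0.00048 = $0.00066.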
## 🛠️ Architecture
- **TypeScript**: strong typing and a modern toolchain
- **Modular**: extensible, modular architecture
- **Multi-provider**: new AI providers are easy to add
- **Professional**: optimized for enterprise use
## 📄 License
MIT - see the [LICENSE](LICENSE) file for details.
## 🤝 Contributing
Contributions are welcome! See our contribution guide to get started.
## 📞 Support
For support and questions:
- Open an issue on GitHub
- Check the documentation
- Contact the development team

52
package.json Normal file

@ -0,0 +1,52 @@
{
"name": "neuraterm",
"version": "1.0.0",
"description": "Terminal IA professionnel avec support multi-providers (ChatGPT, Mistral) et suivi des coûts",
"main": "dist/src/cli.js",
"type": "module",
"bin": {
"neuraterm": "dist/src/cli.js"
},
"scripts": {
"build": "tsc",
"start": "node dist/src/cli.js",
"dev": "ts-node --esm src/cli.ts",
"test": "jest",
"lint": "eslint src",
"clean": "rm -rf dist"
},
"keywords": [
"ai",
"terminal",
"cli",
"assistant",
"chatgpt",
"mistral",
"openai",
"professional"
],
"author": "NeuraTerm Team",
"license": "MIT",
"dependencies": {
"node-fetch": "^3.3.1",
"open": "^9.1.0",
"chalk": "^5.3.0",
"inquirer": "^9.2.0",
"commander": "^11.0.0",
"conf": "^11.0.2"
},
"devDependencies": {
"@types/node": "^20.4.7",
"@types/inquirer": "^9.0.0",
"typescript": "^5.1.6",
"ts-node": "^10.9.1",
"eslint": "^8.46.0",
"@typescript-eslint/eslint-plugin": "^6.2.1",
"@typescript-eslint/parser": "^6.2.1",
"jest": "^29.6.0",
"@types/jest": "^29.5.0"
},
"engines": {
"node": ">=18.0.0"
}
}

167
src/ai/client.ts Normal file

@ -0,0 +1,167 @@
/**
* Main AI client with multi-provider handling
*/
import { OpenAIProvider } from './providers/openai.js';
import { MistralProvider } from './providers/mistral.js';
import { AIRequest, AIResponse, TokenUsageStats, ProviderConfig } from './types.js';
import { logger } from '../utils/logger.js';
export class AIClient {
private providers: Map<string, OpenAIProvider | MistralProvider> = new Map();
private currentProvider: string;
private usageStats: Map<string, TokenUsageStats> = new Map();
constructor(config: ProviderConfig) {
// Set the default first; initializeProviders() falls back to an available provider if needed.
this.currentProvider = config.defaultProvider;
this.initializeProviders(config);
}
private initializeProviders(config: ProviderConfig) {
// Initialize OpenAI if configured
if (config.openai?.apiKey) {
const openaiProvider = new OpenAIProvider(
config.openai.apiKey,
config.openai.model || 'gpt-4o-mini',
config.openai.baseUrl
);
this.providers.set('openai', openaiProvider);
this.usageStats.set('openai', {
provider: 'openai',
model: config.openai.model || 'gpt-4o-mini',
totalRequests: 0,
totalInputTokens: 0,
totalOutputTokens: 0,
totalCost: 0,
averageResponseTime: 0,
lastUsed: new Date()
});
}
// Initialize Mistral if configured
if (config.mistral?.apiKey) {
const mistralProvider = new MistralProvider(
config.mistral.apiKey,
config.mistral.model || 'mistral-large-latest',
config.mistral.baseUrl
);
this.providers.set('mistral', mistralProvider);
this.usageStats.set('mistral', {
provider: 'mistral',
model: config.mistral.model || 'mistral-large-latest',
totalRequests: 0,
totalInputTokens: 0,
totalOutputTokens: 0,
totalCost: 0,
averageResponseTime: 0,
lastUsed: new Date()
});
}
if (this.providers.size === 0) {
throw new Error('No AI provider configured. Please configure at least OpenAI or Mistral.');
}
// Make sure the default provider exists
if (!this.providers.has(config.defaultProvider)) {
const availableProviders = Array.from(this.providers.keys());
this.currentProvider = availableProviders[0];
logger.warn(`Default provider '${config.defaultProvider}' is not available. Falling back to '${this.currentProvider}'`);
}
}
async generateResponse(request: AIRequest, providerName?: string): Promise<AIResponse> {
const provider = providerName || this.currentProvider;
if (!this.providers.has(provider)) {
throw new Error(`Provider '${provider}' is not available. Available providers: ${Array.from(this.providers.keys()).join(', ')}`);
}
const startTime = Date.now();
const aiProvider = this.providers.get(provider)!;
try {
const response = await aiProvider.generateResponse(request);
const responseTime = Date.now() - startTime;
// Update usage statistics
this.updateUsageStats(provider, response, responseTime);
return response;
} catch (error) {
logger.error(`Error from provider ${provider}:`, error);
throw error;
}
}
private updateUsageStats(provider: string, response: AIResponse, responseTime: number) {
const stats = this.usageStats.get(provider);
if (!stats) return;
stats.totalRequests += 1;
stats.totalInputTokens += response.usage.inputTokens;
stats.totalOutputTokens += response.usage.outputTokens;
stats.totalCost += response.cost.totalCost;
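// Running average: newAvg = (oldAvg * (n - 1) + responseTime) / n, where n is the updated request count.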
stats.averageResponseTime = (stats.averageResponseTime * (stats.totalRequests - 1) + responseTime) / stats.totalRequests;
stats.lastUsed = new Date();
}
switchProvider(providerName: string): void {
if (!this.providers.has(providerName)) {
throw new Error(`Provider '${providerName}' is not available. Available providers: ${Array.from(this.providers.keys()).join(', ')}`);
}
this.currentProvider = providerName;
logger.info(`Switched current provider to: ${providerName}`);
}
getCurrentProvider(): string {
return this.currentProvider;
}
getAvailableProviders(): string[] {
return Array.from(this.providers.keys());
}
getUsageStats(provider?: string): TokenUsageStats | TokenUsageStats[] {
if (provider) {
const stats = this.usageStats.get(provider);
if (!stats) {
throw new Error(`No statistics for provider '${provider}'`);
}
return stats;
}
return Array.from(this.usageStats.values());
}
getTotalCost(): number {
return Array.from(this.usageStats.values())
.reduce((total, stats) => total + stats.totalCost, 0);
}
resetStats(provider?: string): void {
if (provider) {
const stats = this.usageStats.get(provider);
if (stats) {
stats.totalRequests = 0;
stats.totalInputTokens = 0;
stats.totalOutputTokens = 0;
stats.totalCost = 0;
stats.averageResponseTime = 0;
}
} else {
this.usageStats.forEach(stats => {
stats.totalRequests = 0;
stats.totalInputTokens = 0;
stats.totalOutputTokens = 0;
stats.totalCost = 0;
stats.averageResponseTime = 0;
});
}
}
async disconnect(): Promise<void> {
logger.info('Disconnecting AI client');
}
}
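For reference, a minimal standalone use of `AIClient` could look like the sketch below (not part of this commit; it assumes an ESM context at the repository root, and the empty Mistral key simply leaves that provider uninitialized):
```typescript
import { AIClient } from './src/ai/client.js';

// Configure one provider, ask one question, and print the answer with its cost.
const client = new AIClient({
  openai: { apiKey: process.env.OPENAI_API_KEY ?? '', model: 'gpt-4o-mini' },
  mistral: { apiKey: '', model: 'mistral-large-latest' }, // left unconfigured
  defaultProvider: 'openai'
});

const response = await client.generateResponse({
  messages: [{ role: 'user', content: 'Summarize the SOLID principles.' }]
});

console.log(response.content);
console.log(`Cost: $${response.cost.totalCost.toFixed(4)}`);
```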

27
src/ai/index.ts Normal file

@ -0,0 +1,27 @@
/**
* Main AI integration module
*/
import { AIClient } from './client.js';
import { ProviderConfig } from './types.js';
export async function initAI(config: any, auth: any): Promise<AIClient> {
const providerConfig: ProviderConfig = {
openai: {
apiKey: config.ai.openai?.apiKey || process.env.OPENAI_API_KEY || '',
model: config.ai.openai?.model || 'gpt-4o-mini',
baseUrl: config.ai.openai?.baseUrl
},
mistral: {
apiKey: config.ai.mistral?.apiKey || process.env.MISTRAL_API_KEY || '',
model: config.ai.mistral?.model || 'mistral-large-latest',
baseUrl: config.ai.mistral?.baseUrl
},
defaultProvider: config.ai.defaultProvider || 'openai'
};
return new AIClient(providerConfig);
}
export { AIClient } from './client.js';
export * from './types.js';

104
src/ai/providers/mistral.ts Normal file

@ -0,0 +1,104 @@
/**
* Mistral AI integration
*/
import fetch from 'node-fetch';
import { AIProvider, AIRequest, AIResponse } from '../types.js';
import { logger } from '../../utils/logger.js';
export class MistralProvider {
private provider: AIProvider;
constructor(apiKey: string, model: string = 'mistral-large-latest', baseUrl?: string) {
this.provider = {
name: 'mistral',
apiKey,
baseUrl: baseUrl || 'https://api.mistral.ai/v1',
model,
tokenLimits: this.getTokenLimits(model),
pricing: this.getPricing(model)
};
}
private getTokenLimits(model: string) {
const limits = {
'mistral-large-latest': { input: 32000, output: 8192 },
'mistral-medium': { input: 32000, output: 8192 },
'mistral-small': { input: 32000, output: 8192 },
'codestral-latest': { input: 32000, output: 8192 },
'mistral-7b-instruct': { input: 32000, output: 8192 }
};
return limits[model as keyof typeof limits] || limits['mistral-large-latest'];
}
private getPricing(model: string) {
// Prices in USD per 1,000 tokens (as of November 2024)
const pricing = {
'mistral-large-latest': { inputTokenPrice: 0.004, outputTokenPrice: 0.012 },
'mistral-medium': { inputTokenPrice: 0.0027, outputTokenPrice: 0.0081 },
'mistral-small': { inputTokenPrice: 0.002, outputTokenPrice: 0.006 },
'codestral-latest': { inputTokenPrice: 0.001, outputTokenPrice: 0.003 },
'mistral-7b-instruct': { inputTokenPrice: 0.00025, outputTokenPrice: 0.00025 }
};
return pricing[model as keyof typeof pricing] || pricing['mistral-large-latest'];
}
async generateResponse(request: AIRequest): Promise<AIResponse> {
const startTime = Date.now();
try {
const response = await fetch(`${this.provider.baseUrl}/chat/completions`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${this.provider.apiKey}`
},
body: JSON.stringify({
model: this.provider.model,
messages: request.messages,
max_tokens: request.maxTokens,
temperature: request.temperature ?? 0.7,
stream: false
})
});
if (!response.ok) {
const error = await response.text();
throw new Error(`Mistral API Error: ${response.status} - ${error}`);
}
const data = await response.json() as any;
const responseTime = Date.now() - startTime;
const usage = {
inputTokens: data.usage.prompt_tokens,
outputTokens: data.usage.completion_tokens,
totalTokens: data.usage.total_tokens
};
const cost = {
inputCost: (usage.inputTokens / 1000) * this.provider.pricing.inputTokenPrice,
outputCost: (usage.outputTokens / 1000) * this.provider.pricing.outputTokenPrice,
totalCost: 0
};
cost.totalCost = cost.inputCost + cost.outputCost;
logger.info(`Mistral API call: ${usage.totalTokens} tokens, $${cost.totalCost.toFixed(4)}, ${responseTime}ms`);
return {
content: data.choices[0].message.content,
usage,
model: this.provider.model,
provider: 'mistral',
cost
};
} catch (error) {
logger.error('Mistral API error:', error);
throw error;
}
}
getProvider(): AIProvider {
return this.provider;
}
}

104
src/ai/providers/openai.ts Normal file

@ -0,0 +1,104 @@
/**
* OpenAI/ChatGPT integration
*/
import fetch from 'node-fetch';
import { AIProvider, AIRequest, AIResponse } from '../types.js';
import { logger } from '../../utils/logger.js';
export class OpenAIProvider {
private provider: AIProvider;
constructor(apiKey: string, model: string = 'gpt-4', baseUrl?: string) {
this.provider = {
name: 'openai',
apiKey,
baseUrl: baseUrl || 'https://api.openai.com/v1',
model,
tokenLimits: this.getTokenLimits(model),
pricing: this.getPricing(model)
};
}
private getTokenLimits(model: string) {
const limits = {
'gpt-4': { input: 8192, output: 4096 },
'gpt-4-turbo': { input: 128000, output: 4096 },
'gpt-3.5-turbo': { input: 16385, output: 4096 },
'gpt-4o': { input: 128000, output: 4096 },
'gpt-4o-mini': { input: 128000, output: 16384 }
};
return limits[model as keyof typeof limits] || limits['gpt-4'];
}
private getPricing(model: string) {
// Prices in USD per 1,000 tokens (as of November 2024)
const pricing = {
'gpt-4': { inputTokenPrice: 0.03, outputTokenPrice: 0.06 },
'gpt-4-turbo': { inputTokenPrice: 0.01, outputTokenPrice: 0.03 },
'gpt-3.5-turbo': { inputTokenPrice: 0.0015, outputTokenPrice: 0.002 },
'gpt-4o': { inputTokenPrice: 0.005, outputTokenPrice: 0.015 },
'gpt-4o-mini': { inputTokenPrice: 0.00015, outputTokenPrice: 0.0006 }
};
return pricing[model as keyof typeof pricing] || pricing['gpt-4'];
}
async generateResponse(request: AIRequest): Promise<AIResponse> {
const startTime = Date.now();
try {
const response = await fetch(`${this.provider.baseUrl}/chat/completions`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${this.provider.apiKey}`
},
body: JSON.stringify({
model: this.provider.model,
messages: request.messages,
max_tokens: request.maxTokens,
temperature: request.temperature ?? 0.7,
stream: false
})
});
if (!response.ok) {
const error = await response.text();
throw new Error(`OpenAI API Error: ${response.status} - ${error}`);
}
const data = await response.json() as any;
const responseTime = Date.now() - startTime;
const usage = {
inputTokens: data.usage.prompt_tokens,
outputTokens: data.usage.completion_tokens,
totalTokens: data.usage.total_tokens
};
const cost = {
inputCost: (usage.inputTokens / 1000) * this.provider.pricing.inputTokenPrice,
outputCost: (usage.outputTokens / 1000) * this.provider.pricing.outputTokenPrice,
totalCost: 0
};
cost.totalCost = cost.inputCost + cost.outputCost;
logger.info(`OpenAI API call: ${usage.totalTokens} tokens, $${cost.totalCost.toFixed(4)}, ${responseTime}ms`);
return {
content: data.choices[0].message.content,
usage,
model: this.provider.model,
provider: 'openai',
cost
};
} catch (error) {
logger.error('OpenAI API error:', error);
throw error;
}
}
getProvider(): AIProvider {
return this.provider;
}
}
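`OpenAIProvider` and `MistralProvider` differ only in their defaults, token limits, and pricing tables; the request and cost-calculation logic is duplicated. A possible refactor (a sketch, not part of this commit) hoists the shared logic into a base class that both providers could extend:
```typescript
import fetch from 'node-fetch';
import { AIProvider, AIRequest, AIResponse } from '../types.js';

// Hypothetical shared base: subclasses supply name, base URL, limits, and pricing.
export abstract class ChatCompletionProvider {
  protected constructor(protected provider: AIProvider) {}

  async generateResponse(request: AIRequest): Promise<AIResponse> {
    const response = await fetch(`${this.provider.baseUrl}/chat/completions`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${this.provider.apiKey}`
      },
      body: JSON.stringify({
        model: this.provider.model,
        messages: request.messages,
        max_tokens: request.maxTokens,
        temperature: request.temperature ?? 0.7,
        stream: false
      })
    });
    if (!response.ok) {
      throw new Error(`${this.provider.name} API Error: ${response.status} - ${await response.text()}`);
    }
    const data = await response.json() as any;
    const usage = {
      inputTokens: data.usage.prompt_tokens,
      outputTokens: data.usage.completion_tokens,
      totalTokens: data.usage.total_tokens
    };
    // Cost = (tokens / 1000) × price per 1K tokens, summed over input and output.
    const inputCost = (usage.inputTokens / 1000) * this.provider.pricing.inputTokenPrice;
    const outputCost = (usage.outputTokens / 1000) * this.provider.pricing.outputTokenPrice;
    return {
      content: data.choices[0].message.content,
      usage,
      model: this.provider.model,
      provider: this.provider.name,
      cost: { inputCost, outputCost, totalCost: inputCost + outputCost }
    };
  }
}
```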

69
src/ai/types.ts Normal file

@ -0,0 +1,69 @@
/**
* Types for AI API handling
*/
export interface AIProvider {
name: string;
apiKey: string;
baseUrl?: string;
model: string;
tokenLimits: {
input: number;
output: number;
};
pricing: {
inputTokenPrice: number; // Price per 1,000 input tokens
outputTokenPrice: number; // Price per 1,000 output tokens
};
}
export interface AIRequest {
messages: Array<{
role: 'system' | 'user' | 'assistant';
content: string;
}>;
maxTokens?: number;
temperature?: number;
stream?: boolean;
}
export interface AIResponse {
content: string;
usage: {
inputTokens: number;
outputTokens: number;
totalTokens: number;
};
model: string;
provider: string;
cost: {
inputCost: number;
outputCost: number;
totalCost: number;
};
}
export interface TokenUsageStats {
provider: string;
model: string;
totalRequests: number;
totalInputTokens: number;
totalOutputTokens: number;
totalCost: number;
averageResponseTime: number;
lastUsed: Date;
}
export interface ProviderConfig {
openai: {
apiKey: string;
model: string;
baseUrl?: string;
};
mistral: {
apiKey: string;
model: string;
baseUrl?: string;
};
defaultProvider: 'openai' | 'mistral';
}

19
src/auth/index.ts Normal file

@ -0,0 +1,19 @@
/**
* Authentication management
*/
export class AuthManager {
constructor(private config: any) {}
isAuthenticated(): boolean {
return true;
}
async authenticate(): Promise<void> {
return;
}
}
export async function initAuthentication(config: any): Promise<AuthManager> {
return new AuthManager(config);
}

8
src/cli.ts Normal file

@ -0,0 +1,8 @@
#!/usr/bin/env node
import { main } from './index.js';
main().catch((error) => {
console.error('Fatal error:', error);
process.exit(1);
});

19
src/codebase/index.ts Normal file

@ -0,0 +1,19 @@
/**
* Codebase analysis
*/
export class CodebaseAnalyzer {
constructor(private config: any) {}
startBackgroundAnalysis(): void {
return;
}
async stopBackgroundAnalysis(): Promise<void> {
return;
}
}
export async function initCodebaseAnalysis(config: any): Promise<CodebaseAnalyzer> {
return new CodebaseAnalyzer(config);
}

184
src/commands/index.ts Normal file

@ -0,0 +1,184 @@
/**
* Command processor
*/
import * as readline from 'readline';
import { AIClient } from '../ai/client.js';
import { Terminal } from '../terminal/index.js';
import { logger } from '../utils/logger.js';
export class CommandProcessor {
private rl: readline.Interface;
constructor(
private config: any,
private dependencies: {
terminal: Terminal;
auth: any;
ai: AIClient;
codebase: any;
fileOps: any;
execution: any;
errors: any;
}
) {
this.rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
prompt: '🧠 NeuraTerm> '
});
}
async startCommandLoop(): Promise<void> {
this.rl.prompt();
this.rl.on('line', async (input) => {
const command = input.trim();
if (command === '') {
this.rl.prompt();
return;
}
try {
await this.processCommand(command);
} catch (error) {
this.dependencies.terminal.displayError(error instanceof Error ? error.message : String(error));
logger.error('Error while processing command:', error);
}
this.rl.prompt();
});
this.rl.on('close', () => {
console.log('\nGoodbye! 👋');
process.exit(0);
});
}
private async processCommand(command: string): Promise<void> {
const [cmd, ...args] = command.split(' ');
switch (cmd.toLowerCase()) {
case 'help':
this.dependencies.terminal.displayHelp();
break;
case 'exit':
case 'quit':
this.rl.close();
break;
case 'clear':
console.clear();
this.dependencies.terminal.displayWelcome();
break;
case 'providers':
this.showProviders();
break;
case 'provider':
this.switchProvider(args[0]);
break;
case 'stats':
this.showStats(args[0]);
break;
case 'cost':
this.showTotalCost();
break;
case 'reset-stats':
this.resetStats(args[0]);
break;
case 'config':
this.showConfig();
break;
default:
// Treat the input as a question for the AI
await this.handleAIQuery(command);
break;
}
}
private showProviders(): void {
const providers = this.dependencies.ai.getAvailableProviders();
const current = this.dependencies.ai.getCurrentProvider();
console.log('\n🤖 Available providers:');
providers.forEach(provider => {
const indicator = provider === current ? '→' : ' ';
console.log(` ${indicator} ${provider}${provider === current ? ' (current)' : ''}`);
});
}
private switchProvider(provider: string): void {
if (!provider) {
console.log('Usage: provider <name>');
return;
}
try {
this.dependencies.ai.switchProvider(provider);
console.log(`✅ Switched provider to: ${provider}`);
} catch (error) {
this.dependencies.terminal.displayError(error instanceof Error ? error.message : String(error));
}
}
private showStats(provider?: string): void {
try {
const stats = this.dependencies.ai.getUsageStats(provider);
this.dependencies.terminal.displayStats(stats);
} catch (error) {
this.dependencies.terminal.displayError(error instanceof Error ? error.message : String(error));
}
}
private showTotalCost(): void {
const totalCost = this.dependencies.ai.getTotalCost();
console.log(`\n💰 Total cost: $${totalCost.toFixed(4)}`);
}
private resetStats(provider?: string): void {
this.dependencies.ai.resetStats(provider);
const message = provider
? `Statistics reset for ${provider}`
: 'All statistics have been reset';
console.log(`${message}`);
}
private showConfig(): void {
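// Note: prints the raw configuration, including any API keys it contains.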
console.log('\n⚙ Current configuration:');
console.log(JSON.stringify(this.config, null, 2));
}
private async handleAIQuery(query: string): Promise<void> {
try {
const response = await this.dependencies.ai.generateResponse({
messages: [
{
role: 'system',
content: 'You are a professional AI assistant. Answer concisely and helpfully.'
},
{
role: 'user',
content: query
}
]
});
this.dependencies.terminal.displayResponse(response);
} catch (error) {
this.dependencies.terminal.displayError(error instanceof Error ? error.message : String(error));
}
}
}
export async function initCommandProcessor(config: any, dependencies: any): Promise<CommandProcessor> {
return new CommandProcessor(config, dependencies);
}

116
src/config/index.ts Normal file

@ -0,0 +1,116 @@
/**
* Configuration management
*/
import { readFileSync, existsSync } from 'fs';
import { homedir } from 'os';
import { join } from 'path';
import { logger } from '../utils/logger.js';
export interface Config {
ai: {
openai?: {
apiKey: string;
model?: string;
baseUrl?: string;
};
mistral?: {
apiKey: string;
model?: string;
baseUrl?: string;
};
defaultProvider: 'openai' | 'mistral';
};
terminal: {
theme: 'dark' | 'light';
showTokenCount: boolean;
showCost: boolean;
autoSave: boolean;
};
telemetry: {
enabled: boolean;
};
}
const DEFAULT_CONFIG: Config = {
ai: {
defaultProvider: 'openai'
},
terminal: {
theme: 'dark',
showTokenCount: true,
showCost: true,
autoSave: true
},
telemetry: {
enabled: false
}
};
export async function loadConfig(options: any = {}): Promise<Config> {
let config = { ...DEFAULT_CONFIG };
// Load from the configuration file
const configPath = join(homedir(), '.neuraterm', 'config.json');
if (existsSync(configPath)) {
try {
const fileConfig = JSON.parse(readFileSync(configPath, 'utf8'));
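// Shallow merge: any top-level key in the file replaces the corresponding default wholesale.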
config = { ...config, ...fileConfig };
logger.info('Configuration loaded from:', configPath);
} catch (error) {
logger.warn('Error while loading configuration:', error);
}
}
// Load from environment variables
if (process.env.OPENAI_API_KEY) {
config.ai.openai = {
...config.ai.openai,
apiKey: process.env.OPENAI_API_KEY
};
}
if (process.env.MISTRAL_API_KEY) {
config.ai.mistral = {
...config.ai.mistral,
apiKey: process.env.MISTRAL_API_KEY
};
}
// Merge in options passed as parameters
config = { ...config, ...options };
// Validation
validateConfig(config);
return config;
}
function validateConfig(config: Config): void {
// Make sure at least one provider is configured
const hasOpenAI = config.ai.openai?.apiKey;
const hasMistral = config.ai.mistral?.apiKey;
if (!hasOpenAI && !hasMistral) {
throw new Error('No API key configured. Please set OPENAI_API_KEY or MISTRAL_API_KEY.');
}
// Make sure the default provider is actually available
if (config.ai.defaultProvider === 'openai' && !hasOpenAI) {
if (hasMistral) {
config.ai.defaultProvider = 'mistral';
logger.warn('Default provider switched to Mistral (OpenAI not configured)');
} else {
throw new Error('Default provider is OpenAI but no API key was provided');
}
}
if (config.ai.defaultProvider === 'mistral' && !hasMistral) {
if (hasOpenAI) {
config.ai.defaultProvider = 'openai';
logger.warn('Default provider switched to OpenAI (Mistral not configured)');
} else {
throw new Error('Default provider is Mistral but no API key was provided');
}
}
}
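Because the merge in `loadConfig` is shallow, a configuration file that sets only part of a section silently drops the remaining defaults in that section. A minimal deep-merge helper (a sketch, not part of this commit) would preserve nested defaults:
```typescript
// Hypothetical helper: recursively merge `override` into `base` so that a file
// setting only ai.openai.model keeps the other defaults intact.
function deepMerge<T extends Record<string, any>>(base: T, override: Partial<T>): T {
  const out: Record<string, any> = { ...base };
  for (const [key, value] of Object.entries(override)) {
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      out[key] = deepMerge(base[key] ?? {}, value);
    } else if (value !== undefined) {
      out[key] = value;
    }
  }
  return out as T;
}

// In loadConfig, `config = { ...config, ...fileConfig }` would become:
// config = deepMerge(config, fileConfig);
```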

24
src/errors/index.ts Normal file

@ -0,0 +1,24 @@
/**
* Error handling
*/
import { logger } from '../utils/logger.js';
export class ErrorHandler {
handleFatalError(error: any): never {
logger.error('Fatal error:', error);
process.exit(1);
}
handleUnhandledRejection(reason: any, promise: Promise<any>): void {
logger.error('Unhandled promise rejection:', reason);
}
handleUncaughtException(error: Error): void {
logger.error('Uncaught exception:', error);
}
}
export function initErrorHandling(): ErrorHandler {
return new ErrorHandler();
}

11
src/execution/index.ts Normal file

@ -0,0 +1,11 @@
/**
* Execution environment
*/
export class ExecutionEnvironment {
constructor(private config: any) {}
}
export async function initExecutionEnvironment(config: any): Promise<ExecutionEnvironment> {
return new ExecutionEnvironment(config);
}

11
src/fileops/index.ts Normal file

@ -0,0 +1,11 @@
/**
* File operations
*/
export class FileOperations {
constructor(private config: any) {}
}
export async function initFileOperations(config: any): Promise<FileOperations> {
return new FileOperations(config);
}

157
src/index.ts Normal file

@ -0,0 +1,157 @@
/**
* NeuraTerm - Professional AI Terminal
*
* Main entry point of the application.
* Manages the application lifecycle and subsystem initialization.
*/
import { loadConfig } from './config/index.js';
import { initTerminal } from './terminal/index.js';
import { initAuthentication } from './auth/index.js';
import { initAI } from './ai/index.js';
import { initCodebaseAnalysis } from './codebase/index.js';
import { initCommandProcessor } from './commands/index.js';
import { initFileOperations } from './fileops/index.js';
import { initExecutionEnvironment } from './execution/index.js';
import { initErrorHandling } from './errors/index.js';
import { initTelemetry } from './telemetry/index.js';
import { logger } from './utils/logger.js';
/**
* Application instance holding references to all subsystems
*/
export interface AppInstance {
config: any;
terminal: any;
auth: any;
ai: any;
codebase: any;
commands: any;
fileOps: any;
execution: any;
errors: any;
telemetry: any;
}
/**
* Initializes all application subsystems
*/
export async function initialize(options: any = {}): Promise<AppInstance> {
const errors = initErrorHandling();
try {
logger.info('Starting NeuraTerm...');
const config = await loadConfig(options);
const terminal = await initTerminal(config);
const auth = await initAuthentication(config);
const ai = await initAI(config, auth);
const codebase = await initCodebaseAnalysis(config);
const fileOps = await initFileOperations(config);
const execution = await initExecutionEnvironment(config);
const commands = await initCommandProcessor(config, {
terminal,
auth,
ai,
codebase,
fileOps,
execution,
errors
});
const telemetry = config.telemetry.enabled
? await initTelemetry(config)
: null;
logger.info('NeuraTerm initialized successfully');
return {
config,
terminal,
auth,
ai,
codebase,
commands,
fileOps,
execution,
errors,
telemetry
};
} catch (error) {
errors.handleFatalError(error);
throw error;
}
}
/**
* Runs the application's main loop
*/
export async function run(app: AppInstance): Promise<void> {
try {
app.terminal.displayWelcome();
if (!app.auth.isAuthenticated()) {
await app.auth.authenticate();
}
app.codebase.startBackgroundAnalysis();
await app.commands.startCommandLoop();
await shutdown(app);
} catch (error) {
app.errors.handleFatalError(error);
}
}
/**
* Graceful application shutdown
*/
export async function shutdown(app: AppInstance): Promise<void> {
logger.info('Shutting down NeuraTerm...');
await app.codebase.stopBackgroundAnalysis();
if (app.telemetry) {
await app.telemetry.submitTelemetry();
}
await app.ai.disconnect();
logger.info('NeuraTerm stopped');
}
/**
* Process signal handlers for a clean shutdown
*/
function setupProcessHandlers(app: AppInstance): void {
process.on('SIGINT', async () => {
logger.info('SIGINT signal received');
await shutdown(app);
process.exit(0);
});
process.on('SIGTERM', async () => {
logger.info('SIGTERM signal received');
await shutdown(app);
process.exit(0);
});
process.on('unhandledRejection', (reason, promise) => {
logger.error('Unhandled promise rejection:', reason);
app.errors.handleUnhandledRejection(reason, promise);
});
process.on('uncaughtException', (error) => {
logger.error('Uncaught exception:', error);
app.errors.handleUncaughtException(error);
process.exit(1);
});
}
/**
* Main entry point
*/
export async function main(options: any = {}): Promise<void> {
const app = await initialize(options);
setupProcessHandlers(app);
await run(app);
}

15
src/telemetry/index.ts Normal file

@ -0,0 +1,15 @@
/**
* Telemetry
*/
export class Telemetry {
constructor(private config: any) {}
async submitTelemetry(): Promise<void> {
return;
}
}
export async function initTelemetry(config: any): Promise<Telemetry> {
return new Telemetry(config);
}

96
src/terminal/index.ts Normal file

@ -0,0 +1,96 @@
/**
* Terminal interface
*/
export class Terminal {
constructor(private config: any) {}
displayWelcome(): void {
console.log(`
🧠 NeuraTerm
Professional AI Terminal
Version: 1.0.0
Type 'help' to see the available commands.
Type 'exit' to quit.
`);
}
displayStats(stats: any): void {
console.log('\n📊 Usage statistics:');
console.log('─'.repeat(40));
if (Array.isArray(stats)) {
stats.forEach(stat => {
console.log(`${stat.provider.toUpperCase()} (${stat.model}):`);
console.log(` Requests: ${stat.totalRequests}`);
console.log(` Tokens: ${stat.totalInputTokens + stat.totalOutputTokens} (${stat.totalInputTokens} → ${stat.totalOutputTokens})`);
console.log(` Cost: $${stat.totalCost.toFixed(4)}`);
console.log(` Average response time: ${Math.round(stat.averageResponseTime)}ms`);
console.log('');
});
} else {
console.log(`${stats.provider.toUpperCase()} (${stats.model}):`);
console.log(` Requests: ${stats.totalRequests}`);
console.log(` Tokens: ${stats.totalInputTokens + stats.totalOutputTokens} (${stats.totalInputTokens} → ${stats.totalOutputTokens})`);
console.log(` Cost: $${stats.totalCost.toFixed(4)}`);
console.log(` Average response time: ${Math.round(stats.averageResponseTime)}ms`);
}
}
displayResponse(response: any): void {
console.log('\n' + response.content);
if (this.config.terminal.showTokenCount || this.config.terminal.showCost) {
console.log('\n' + '─'.repeat(40));
if (this.config.terminal.showTokenCount) {
console.log(`Tokens: ${response.usage.totalTokens} (${response.usage.inputTokens} → ${response.usage.outputTokens})`);
}
if (this.config.terminal.showCost) {
console.log(`Cost: $${response.cost.totalCost.toFixed(4)} | Provider: ${response.provider} (${response.model})`);
}
}
}
displayError(error: string): void {
console.error(`\n❌ Error: ${error}`);
}
displayHelp(): void {
console.log(`
🔧 Available commands:
General:
help - Show this help
exit, quit - Quit NeuraTerm
clear - Clear the screen
Provider management:
providers - List the available providers
provider <name> - Switch provider (openai, mistral)
Statistics:
stats - Show usage statistics
stats <provider> - Statistics for a specific provider
cost - Show the total cost
reset-stats - Reset the statistics
Configuration:
config - Show the current configuration
To ask the AI a question, just type your message.
`);
}
}
export async function initTerminal(config: any): Promise<Terminal> {
return new Terminal(config);
}

44
src/utils/logger.ts Normal file

@ -0,0 +1,44 @@
/**
* Logging system
*/
export enum LogLevel {
DEBUG = 0,
INFO = 1,
WARN = 2,
ERROR = 3
}
class Logger {
private level: LogLevel = LogLevel.INFO;
setLevel(level: LogLevel): void {
this.level = level;
}
debug(message: string, ...args: any[]): void {
if (this.level <= LogLevel.DEBUG) {
console.debug(`[DEBUG] ${new Date().toISOString()} - ${message}`, ...args);
}
}
info(message: string, ...args: any[]): void {
if (this.level <= LogLevel.INFO) {
console.log(`[INFO] ${new Date().toISOString()} - ${message}`, ...args);
}
}
warn(message: string, ...args: any[]): void {
if (this.level <= LogLevel.WARN) {
console.warn(`[WARN] ${new Date().toISOString()} - ${message}`, ...args);
}
}
error(message: string, ...args: any[]): void {
if (this.level <= LogLevel.ERROR) {
console.error(`[ERROR] ${new Date().toISOString()} - ${message}`, ...args);
}
}
}
export const logger = new Logger();

29
tsconfig.json Normal file

@ -0,0 +1,29 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "ESNext",
"moduleResolution": "node",
"allowSyntheticDefaultImports": true,
"esModuleInterop": true,
"allowJs": true,
"sourceMap": true,
"outDir": "./dist",
"strict": true,
"noImplicitAny": true,
"strictNullChecks": true,
"strictFunctionTypes": true,
"noImplicitReturns": true,
"noFallthroughCasesInSwitch": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"resolveJsonModule": true
},
"include": [
"src/**/*"
],
"exclude": [
"node_modules",
"dist",
".divers"
]
}