Files
openharmony-mlx/pyproject.toml
Arthur Colle 92f5b57da3 Initial release: OpenHarmony-MLX - High-Performance Apple Silicon GPT-OSS Implementation
This is a complete rebranding and optimization of the original GPT-OSS codebase for Apple Silicon:

🚀 Features:
- Native MLX acceleration for M1/M2/M3/M4 chips
- Complete MLX implementation with Mixture of Experts (MoE)
- Memory-efficient quantization (4-bit MXFP4)
- Drop-in replacement APIs for existing backends
- Full tool integration (browser, python, apply_patch)
- Comprehensive build system with Metal kernels

📦 What's Included:
- gpt_oss/mlx_gpt_oss/ - Complete MLX implementation
- All original inference backends (torch, triton, metal, vllm)
- Command-line interfaces and Python APIs
- Developer tools and evaluation suite
- Updated branding and documentation

🍎 Apple Silicon Optimized:
- Up to 40 tokens/sec performance on Apple Silicon
- Run GPT-OSS-120b in 30GB with quantization
- Native Metal kernel acceleration
- Memory-mapped weight loading

🔧 Ready to Deploy:
- Updated package name to openharmony-mlx
- Comprehensive .gitignore for clean releases
- Updated README with Apple Silicon focus
- All build artifacts cleaned up

🧠 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-06 19:28:25 -04:00

50 lines
1.2 KiB
TOML

[project]
name = "openharmony-mlx"
description = "High-performance MLX implementation for GPT-OSS models on Apple Silicon"
dependencies = [
"openai-harmony",
"tiktoken>=0.9.0",
"aiohttp>=3.12.14",
"chz>=0.3.0",
"docker>=7.1.0",
"fastapi>=0.116.1",
"html2text>=2025.4.15",
"lxml>=4.9.4",
"pydantic>=2.11.7",
"structlog>=25.4.0",
"tenacity>=9.1.2",
"uvicorn>=0.35.0",
"requests>=2.31.0",
"termcolor",
]
readme = "README.md"
requires-python = ">=3.12,<3.13"
version = "0.0.1"
[project.optional-dependencies]
triton = ["triton", "safetensors>=0.5.3", "torch>=2.7.0"]
torch = ["safetensors>=0.5.3", "torch>=2.7.0"]
metal = ["numpy", "tqdm", "safetensors", "torch"]
mlx = ["mlx", "safetensors"]
test = ["pytest>=8.4.1", "httpx>=0.28.1"]
eval = ["pandas", "numpy", "openai", "jinja2", "tqdm", "blobfile"]
[build-system]
requires = ["setuptools>=68"]
build-backend = "gpt_oss_build_backend.backend"
backend-path = ["_build"]
[tool.setuptools]
packages = ["gpt_oss"]
[tool.scikit-build]
cmake.source-dir = "." # pick up the root CMakeLists.txt
cmake.args = [
"-DGPTOSS_BUILD_PYTHON=ON",
"-DCMAKE_BUILD_TYPE=Release",
"-DBUILD_SHARED_LIBS=OFF",
]
[tool.scikit-build.wheel]
packages = ["gpt_oss"] # copy the whole Python package tree