diff --git a/.woodpecker.yml b/.woodpecker.yml
index 5524d8b..b8ab5c0 100644
--- a/.woodpecker.yml
+++ b/.woodpecker.yml
@@ -3,7 +3,7 @@ labels:
platform: darwin/arm64
gpu: metal
when:
- event: [push, manual, pull_request]
+ event: [manual]
steps:
- name: create some file
image: /bin/zsh
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..0d4eeb9
--- /dev/null
+++ b/README.md
@@ -0,0 +1,572 @@
+# Woodpecker CI - S3 Object Storage Upload Guide
+
+This guide explains how to use the S3 plugin in Woodpecker CI to upload files and folders to object storage (any S3 API-compatible service such as AWS S3, MinIO, or RustFS).
+
+## 📋 Table of Contents
+
+- [Prerequisites](#prerequisites)
+- [Configuring Secrets](#configuring-secrets)
+- [Basic Usage](#basic-usage)
+  - [Uploading a Single File](#uploading-a-single-file)
+  - [Uploading Multiple Files](#uploading-multiple-files)
+  - [Uploading an Entire Folder](#uploading-an-entire-folder)
+- [Advanced Usage](#advanced-usage)
+- [Complete Examples](#complete-examples)
+- [Troubleshooting](#troubleshooting)
+
+---
+
+## Prerequisites
+
+### 1. Install plugin-s3
+
+Make sure `plugin-s3` is installed in your Woodpecker Agent environment:
+
+```bash
+# Using Homebrew (macOS)
+brew install plugin-s3
+
+# Or install with Go
+go install github.com/woodpecker-ci/plugin-s3@latest
+
+# Verify the installation
+which plugin-s3
+```
+
+### 2. Prepare S3 Credentials
+
+You will need the following:
+- **Access Key ID**: the S3 access key ID
+- **Secret Access Key**: the S3 secret key
+- **Bucket name**: the target bucket
+- **Endpoint**: the S3 service endpoint (may be omitted for AWS S3)
+- **Region**: the region (e.g. `us-east-1`)
+
+---
+
+## Configuring Secrets
+
+### Via the Woodpecker UI
+
+1. Open your repository page
+2. Go to **Settings** → **Secrets**
+3. Add the following secrets:
+
+| Secret name | Description | Example value |
+|---------|------|--------|
+| `AWS_ACCESS_KEY_ID` | S3 access key ID | `AKIAIOSFODNN7EXAMPLE` |
+| `AWS_SECRET_ACCESS_KEY` | S3 secret key | `wJalrXUtnFEMI/K7MDENG/...` |
+| `S3_BUCKET` | Bucket name | `my-bucket` |
+| `S3_ENDPOINT` | S3 endpoint (for self-hosted services) | `https://s3.example.com:9000` |
+| `AWS_DEFAULT_REGION` | AWS region | `us-east-1` |
+
+4. Make sure the appropriate event types are enabled for each secret (e.g. `push`, `manual`, `pull_request`)
+
+### Via the CLI
+
+```bash
+# Add the Access Key
+woodpecker-cli repo secret add \
+ --repository your-org/your-repo \
+ --name AWS_ACCESS_KEY_ID \
+ --value "your-access-key-id"
+
+# Add the Secret Key
+woodpecker-cli repo secret add \
+ --repository your-org/your-repo \
+ --name AWS_SECRET_ACCESS_KEY \
+ --value "your-secret-access-key"
+
+# Add the bucket name
+woodpecker-cli repo secret add \
+ --repository your-org/your-repo \
+ --name S3_BUCKET \
+ --value "my-bucket"
+
+# Add the endpoint (optional, for self-hosted S3)
+woodpecker-cli repo secret add \
+ --repository your-org/your-repo \
+ --name S3_ENDPOINT \
+ --value "https://s3.example.com:9000"
+
+# Add the region
+woodpecker-cli repo secret add \
+ --repository your-org/your-repo \
+ --name AWS_DEFAULT_REGION \
+ --value "us-east-1"
+```
+
+---
+
+## Basic Usage
+
+### Uploading a Single File
+
+Upload a single file to an S3 bucket:
+
+```yaml
+steps:
+ - name: upload-single-file
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+      # Create a test file
+ echo "Hello from Woodpecker CI" > hello.txt
+
+      # Upload to S3
+ export PLUGIN_SOURCE="hello.txt"
+ export PLUGIN_BUCKET="$S3_BUCKET"
+ export PLUGIN_TARGET="uploads/"
+ export PLUGIN_ENDPOINT="$S3_ENDPOINT"
+ export PLUGIN_PATH_STYLE=true
+ plugin-s3
+```
+
+**Result**: `hello.txt` is uploaded to `s3://your-bucket/uploads/hello.txt`
+
+---
+
+### Uploading Multiple Files
+
+Upload several files at once using a wildcard:
+
+```yaml
+steps:
+ - name: upload-multiple-files
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+      # Create several test files
+ echo "File 1" > file1.txt
+ echo "File 2" > file2.txt
+ echo "File 3" > file3.log
+
+      # Upload all .txt files
+ export PLUGIN_SOURCE="*.txt"
+ export PLUGIN_BUCKET="$S3_BUCKET"
+ export PLUGIN_TARGET="logs/"
+ export PLUGIN_ENDPOINT="$S3_ENDPOINT"
+ export PLUGIN_PATH_STYLE=true
+ plugin-s3
+```
+
+**Result**: all `.txt` files are uploaded to `s3://your-bucket/logs/` (`file3.log` does not match the pattern and is skipped)
+
+---
+
+### Uploading an Entire Folder
+
+Recursively upload a directory and all of its subdirectories:
+
+```yaml
+steps:
+ - name: upload-folder
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+      # Create a directory structure
+ mkdir -p dist/css dist/js dist/images
+ echo "body { color: red; }" > dist/css/style.css
+ echo "console.log('hello');" > dist/js/app.js
+ echo "placeholder" > dist/images/logo.png
+ echo "Hello" > dist/index.html
+
+      # Upload the entire dist folder
+ export PLUGIN_SOURCE="dist/**/*"
+ export PLUGIN_BUCKET="$S3_BUCKET"
+ export PLUGIN_TARGET="website/"
+ export PLUGIN_ENDPOINT="$S3_ENDPOINT"
+ export PLUGIN_PATH_STYLE=true
+ export PLUGIN_STRIP_PREFIX="dist/"
+ plugin-s3
+```
+
+**Result**:
+```
+s3://your-bucket/website/css/style.css
+s3://your-bucket/website/js/app.js
+s3://your-bucket/website/images/logo.png
+s3://your-bucket/website/index.html
+```
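+
+The way `PLUGIN_STRIP_PREFIX` and `PLUGIN_TARGET` combine to form object keys can be sketched in plain shell (an illustration only, using the names from the example above):
+
+```bash
+# How a source path maps to an object key when a prefix is stripped
+SOURCE="dist/css/style.css"
+STRIP_PREFIX="dist/"
+TARGET="website/"
+KEY="${TARGET}${SOURCE#"$STRIP_PREFIX"}"
+echo "$KEY"   # website/css/style.css
+```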
+
+---
+
+## Advanced Usage
+
+### Setting File ACLs
+
+```bash
+export PLUGIN_ACL="public-read"  # publicly readable
+# Allowed values: private, public-read, public-read-write, authenticated-read
+```
+
+### Setting Cache Control
+
+```bash
+export PLUGIN_CACHE_CONTROL="max-age=3600"  # cache for 1 hour
+```
+
+### Setting Content Type
+
+```bash
+export PLUGIN_CONTENT_TYPE="text/html"
+export PLUGIN_CONTENT_ENCODING="gzip"
+```
+
+### Deleting Old Files in the Target Folder
+
+```bash
+export PLUGIN_DELETE=true  # delete existing files at the target path before uploading
+```
+
+### Using Server-Side Encryption
+
+```bash
+export PLUGIN_ENCRYPTION="AES256"  # or "aws:kms"
+```
+
+### Full Advanced Example
+
+```yaml
+steps:
+ - name: deploy-website
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ export PLUGIN_SOURCE="public/**/*"
+ export PLUGIN_BUCKET="$S3_BUCKET"
+ export PLUGIN_TARGET="production/"
+ export PLUGIN_ENDPOINT="$S3_ENDPOINT"
+ export PLUGIN_PATH_STYLE=true
+ export PLUGIN_STRIP_PREFIX="public/"
+ export PLUGIN_ACL="public-read"
+ export PLUGIN_CACHE_CONTROL="max-age=31536000"
+ export PLUGIN_DELETE=true
+ plugin-s3
+```
+
+---
+
+## Complete Examples
+
+### Minimal Example - `.woodpecker.yml`
+
+A complete Woodpecker configuration demonstrating file and folder uploads:
+
+```yaml
+# .woodpecker.yml
+labels:
+ platform: linux/amd64
+
+when:
+ event: [push, manual]
+
+steps:
+  # Step 1: build the project (example)
+ - name: build
+ image: node:18-alpine
+ commands:
+ - echo "Building project..."
+ - mkdir -p dist/assets
+      - echo 'Hello World' > dist/index.html
+ - echo 'body { font-family: Arial; }' > dist/assets/style.css
+ - echo 'console.log("App loaded");' > dist/assets/app.js
+ - echo "Build complete!"
+ - ls -R dist/
+
+  # Step 2: upload a single log file
+ - name: upload-build-log
+ image: alpine:latest
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+        # Create a build log
+ date > build.log
+ echo "Build completed successfully" >> build.log
+
+        # Upload the log file
+ export PLUGIN_SOURCE="build.log"
+ export PLUGIN_BUCKET="$S3_BUCKET"
+ export PLUGIN_TARGET="logs/build-${CI_COMMIT_SHA:0:8}.log"
+ export PLUGIN_ENDPOINT="$S3_ENDPOINT"
+ export PLUGIN_PATH_STYLE=true
+ plugin-s3
+
+ echo "✅ Build log uploaded to s3://$S3_BUCKET/logs/"
+
+  # Step 3: upload the entire dist folder
+ - name: upload-dist-folder
+ image: alpine:latest
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+        # Upload the entire dist directory
+ export PLUGIN_SOURCE="dist/**/*"
+ export PLUGIN_BUCKET="$S3_BUCKET"
+ export PLUGIN_TARGET="website/${CI_COMMIT_BRANCH}/"
+ export PLUGIN_ENDPOINT="$S3_ENDPOINT"
+ export PLUGIN_PATH_STYLE=true
+ export PLUGIN_STRIP_PREFIX="dist/"
+ export PLUGIN_ACL="public-read"
+ plugin-s3
+
+ echo "✅ Website deployed to s3://$S3_BUCKET/website/${CI_COMMIT_BRANCH}/"
+
+  # Step 4: upload build artifacts (zip archive)
+ - name: upload-artifacts
+ image: alpine:latest
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+        # Create the zip archive
+ apk add --no-cache zip
+ zip -r dist-${CI_COMMIT_SHA:0:8}.zip dist/
+
+        # Upload the archive
+ export PLUGIN_SOURCE="dist-*.zip"
+ export PLUGIN_BUCKET="$S3_BUCKET"
+ export PLUGIN_TARGET="releases/${CI_COMMIT_BRANCH}/"
+ export PLUGIN_ENDPOINT="$S3_ENDPOINT"
+ export PLUGIN_PATH_STYLE=true
+ plugin-s3
+
+ echo "✅ Artifacts uploaded to s3://$S3_BUCKET/releases/${CI_COMMIT_BRANCH}/"
+```
+
+---
+
+### macOS Local Agent Example
+
+If you run a local macOS agent (such as a Mac mini), the configuration differs slightly:
+
+```yaml
+# .woodpecker.yml (macOS Local Agent)
+labels:
+ host: Mac-mini.local
+ platform: darwin/arm64
+
+when:
+ event: [push, manual]
+
+steps:
+ - name: build-app
+ image: /bin/zsh
+ commands:
+ - echo "Building on macOS..."
+ - mkdir -p build/output
+ - echo "Binary placeholder" > build/output/app
+ - echo "Config file" > build/output/config.json
+ - ls -R build/
+
+ - name: upload-single-file
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ echo "📦 Uploading config file..."
+ export PLUGIN_SOURCE="build/output/config.json"
+ export PLUGIN_BUCKET="$S3_BUCKET"
+ export PLUGIN_TARGET="configs/"
+ export PLUGIN_ENDPOINT="$S3_ENDPOINT"
+ export PLUGIN_PATH_STYLE=true
+ plugin-s3
+ echo "✅ Config uploaded"
+
+ - name: upload-build-folder
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ echo "📦 Uploading entire build folder..."
+ export PLUGIN_SOURCE="build/output/**/*"
+ export PLUGIN_BUCKET="$S3_BUCKET"
+ export PLUGIN_TARGET="builds/macos-$(date +%Y%m%d-%H%M%S)/"
+ export PLUGIN_ENDPOINT="$S3_ENDPOINT"
+ export PLUGIN_PATH_STYLE=true
+ export PLUGIN_STRIP_PREFIX="build/output/"
+ plugin-s3
+ echo "✅ Build folder uploaded"
+```
+
+---
+
+## Troubleshooting
+
+### Issue 1: Secrets Not Passed Through
+
+**Error message**: `No S3 credentials found`
+
+**Solution**:
+- ✅ Use the `from_secret:` syntax, not `${SECRET_NAME}`
+- ✅ Check that secret names in the Woodpecker UI match exactly
+- ✅ Confirm the secret's allowed events include the triggering event (e.g. `manual`)
+
+```yaml
+# ❌ Wrong
+environment:
+  AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
+
+# ✅ Correct
+environment:
+  AWS_ACCESS_KEY_ID:
+    from_secret: AWS_ACCESS_KEY_ID
+```
+
+### Issue 2: plugin-s3 Not Found
+
+**Error message**: `plugin-s3: command not found`
+
+**Solution**:
+```bash
+# Install plugin-s3
+brew install plugin-s3
+
+# Or add it to PATH
+export PATH=$PATH:/usr/local/bin
+```
+
+### Issue 3: Endpoint Connection Failure
+
+**Error message**: `connection refused` or `timeout`
+
+**Solution**:
+- ✅ Check the `S3_ENDPOINT` format (it must include the scheme, e.g. `https://`)
+- ✅ Confirm firewall rules allow access
+- ✅ Verify the TLS certificate (self-signed certificates may need extra configuration)
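+
+A minimal pre-flight check along these lines can run at the top of the upload step (a sketch; the endpoint value below is hypothetical):
+
+```bash
+# Ensure S3_ENDPOINT includes a scheme before handing it to plugin-s3
+S3_ENDPOINT="s3.example.com:9000"  # hypothetical value missing the scheme
+case "$S3_ENDPOINT" in
+  http://*|https://*) echo "endpoint format looks ok" ;;
+  *) S3_ENDPOINT="https://$S3_ENDPOINT"; echo "scheme was missing, now: $S3_ENDPOINT" ;;
+esac
+```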
+
+### Issue 4: Permission Denied
+
+**Error message**: `Access Denied` or `403 Forbidden`
+
+**Solution**:
+- ✅ Verify the Access Key and Secret Key are correct
+- ✅ Check that the IAM policy grants `s3:PutObject`
+- ✅ Confirm the bucket exists and is writable
+
+### Debugging Tips
+
+Add debug output to inspect environment variables:
+
+```yaml
+commands:
+  - |
+    echo "=== Debug info ==="
+    echo "Bucket: ${S3_BUCKET}"
+    echo "Endpoint: ${S3_ENDPOINT}"
+    echo "Access Key (first 10 chars): ${AWS_ACCESS_KEY_ID:0:10}..."
+    echo "================="
+
+    # Continue with the upload...
+```
+
+---
+
+## Environment Variable Reference
+
+| Variable | Description | Required | Example |
+|--------|------|------|------|
+| `PLUGIN_SOURCE` | Source path (wildcards supported) | ✅ | `dist/**/*` |
+| `PLUGIN_BUCKET` | Target bucket | ✅ | `my-bucket` |
+| `PLUGIN_TARGET` | Target path prefix | ❌ | `uploads/` |
+| `PLUGIN_ENDPOINT` | S3 endpoint URL | ❌ | `https://s3.example.com` |
+| `PLUGIN_PATH_STYLE` | Use path-style URLs | ❌ | `true` |
+| `PLUGIN_STRIP_PREFIX` | Prefix to strip from source paths | ❌ | `dist/` |
+| `PLUGIN_ACL` | Access control list | ❌ | `public-read` |
+| `PLUGIN_CACHE_CONTROL` | Cache-Control header | ❌ | `max-age=3600` |
+| `PLUGIN_DELETE` | Delete target files before upload | ❌ | `true` |
+| `PLUGIN_ENCRYPTION` | Server-side encryption | ❌ | `AES256` |
+
+---
+
+## Summary
+
+This guide covers the common scenarios for uploading to S3 from Woodpecker CI:
+
+✅ **Single-file uploads**: for logs, config files, and similar artifacts
+✅ **Multi-file uploads**: batch uploads with wildcards
+✅ **Folder uploads**: recursive uploads of entire directory trees
+✅ **Advanced options**: ACLs, caching, encryption, and more
+
+If you run into problems, see the [Woodpecker documentation](https://woodpecker-ci.org/docs/) or open an issue.
+
+---
+
+**License**: MIT
+**Maintainer**: Your Team
+**Last updated**: 2025-10-12
\ No newline at end of file
diff --git a/example/macos/woodpecker.yml b/example/macos/woodpecker.yml
new file mode 100644
index 0000000..b52a36a
--- /dev/null
+++ b/example/macos/woodpecker.yml
@@ -0,0 +1,62 @@
+# .woodpecker.yml (macOS Local Agent)
+labels:
+ host: Mac-mini.local
+ platform: darwin/arm64
+
+when:
+ event: [push, manual]
+
+steps:
+ - name: build-app
+ image: /bin/zsh
+ commands:
+ - echo "Building on macOS..."
+ - mkdir -p build/output
+ - echo "Binary placeholder" > build/output/app
+ - echo "Config file" > build/output/config.json
+ - ls -R build/
+
+ - name: upload-single-file
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ echo "📦 Uploading config file..."
+ export PLUGIN_SOURCE="build/output/config.json"
+ export PLUGIN_BUCKET="$S3_BUCKET"
+ export PLUGIN_TARGET="configs/"
+ export PLUGIN_ENDPOINT="$S3_ENDPOINT"
+ export PLUGIN_PATH_STYLE=true
+ plugin-s3
+ echo "✅ Config uploaded"
+
+ - name: upload-build-folder
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ echo "📦 Uploading entire build folder..."
+ export PLUGIN_SOURCE="build/output/**/*"
+ export PLUGIN_BUCKET="$S3_BUCKET"
+ export PLUGIN_TARGET="builds/macos-$(date +%Y%m%d-%H%M%S)/"
+ export PLUGIN_ENDPOINT="$S3_ENDPOINT"
+ export PLUGIN_PATH_STYLE=true
+ export PLUGIN_STRIP_PREFIX="build/output/"
+ plugin-s3
+ echo "✅ Build folder uploaded"
\ No newline at end of file
diff --git a/mc/README.md b/mc/README.md
new file mode 100644
index 0000000..204d29b
--- /dev/null
+++ b/mc/README.md
@@ -0,0 +1,591 @@
+# MinIO Client (mc) Usage Guide
+
+This directory contains complete examples of using MinIO Client (mc) to work with RustFS/S3 object storage from Woodpecker CI.
+
+## 📋 Table of Contents
+
+- [About mc](#about-mc)
+- [Directory Layout](#directory-layout)
+- [Quick Start](#quick-start)
+- [Common Operations](#common-operations)
+  - [Uploading Files](#uploading-files)
+  - [Downloading Files](#downloading-files)
+  - [Listing Files](#listing-files)
+  - [Deleting Files](#deleting-files)
+  - [Generating Temporary Download Links](#generating-temporary-download-links)
+  - [Enabling Public Access](#enabling-public-access)
+  - [Cleaning Up Old Files](#cleaning-up-old-files)
+- [Temporary Links in Depth](#temporary-links-in-depth)
+- [Command Cheat Sheet](#command-cheat-sheet)
+- [Best Practices](#best-practices)
+
+---
+
+## About mc
+
+MinIO Client (mc) is a powerful command-line tool for working with S3-compatible object storage services.
+
+### Why mc?
+
+✅ **Full-featured**: upload, download, delete, mirror, presigned links, and more
+✅ **Cross-platform**: runs on Linux, macOS, and Windows
+✅ **Compatible**: works with the S3 API and RustFS
+✅ **Easy to use**: concise commands that mirror familiar file operations
+✅ **Verifiable**: exposes metadata such as ETag and VersionId
+✅ **Presigned URLs**: generates temporary download links that need no credentials
+
+---
+
+## Directory Layout
+
+```
+mc/
+├── README.md              # This document
+├── linux/
+│   └── woodpecker.yml     # Complete example for Linux
+└── macos/
+    └── woodpecker.yml     # Complete example for macOS
+```
+
+---
+
+## Quick Start
+
+### Installing mc
+
+**Linux:**
+```bash
+curl -sSL https://dl.min.io/client/mc/release/linux-amd64/mc \
+ -o /usr/local/bin/mc
+chmod +x /usr/local/bin/mc
+```
+
+**macOS (Intel):**
+```bash
+curl -sSL https://dl.min.io/client/mc/release/darwin-amd64/mc \
+ -o /usr/local/bin/mc
+chmod +x /usr/local/bin/mc
+```
+
+**macOS (Apple Silicon):**
+```bash
+curl -sSL https://dl.min.io/client/mc/release/darwin-arm64/mc \
+ -o /usr/local/bin/mc
+chmod +x /usr/local/bin/mc
+```
+
+### Configuring an mc Alias
+
+```bash
+# Configure the RustFS connection
+mc alias set rustfs https://your-rustfs-server:9000 YOUR_ACCESS_KEY YOUR_SECRET_KEY
+
+# Verify the configuration
+mc admin info rustfs
+```
+
+---
+
+## Common Operations
+
+### Uploading Files
+
+#### Upload a single file
+```bash
+mc cp local-file.txt rustfs/my-bucket/path/remote-file.txt
+```
+
+#### Upload an entire directory
+```bash
+mc cp --recursive ./dist/ rustfs/my-bucket/website/
+```
+
+#### Upload while stripping the source prefix
+```bash
+# Uploads dist/css/style.css as css/style.css
+mc mirror --overwrite ./dist/ rustfs/my-bucket/website/
+```
+
+#### Upload and fetch the ETag
+```bash
+mc cp myfile.jar rustfs/my-bucket/builds/
+mc stat rustfs/my-bucket/builds/myfile.jar | grep ETag
+```
+
+---
+
+### Downloading Files
+
+#### Download a single file
+```bash
+mc cp rustfs/my-bucket/path/file.txt ./local-file.txt
+```
+
+#### Download an entire directory
+```bash
+mc cp --recursive rustfs/my-bucket/backup/ ./local-backup/
+```
+
+#### Download the latest version
+```bash
+mc cp rustfs/my-bucket/versioned-file.txt ./
+```
+
+---
+
+### Listing Files
+
+#### List bucket contents
+```bash
+# List top-level entries
+mc ls rustfs/my-bucket/
+
+# List all files recursively
+mc ls --recursive rustfs/my-bucket/
+
+# List recursively with a summary
+mc ls --recursive --summarize rustfs/my-bucket/
+```
+
+#### Filter by time
+```bash
+# List files modified in the last 7 days
+mc ls --recursive --newer-than 7d rustfs/my-bucket/
+
+# List files older than 30 days
+mc ls --recursive --older-than 30d rustfs/my-bucket/
+```
+
+---
+
+### Deleting Files
+
+#### Delete a single file
+```bash
+mc rm rustfs/my-bucket/path/file.txt
+```
+
+#### Delete an entire directory
+```bash
+mc rm --recursive --force rustfs/my-bucket/old-builds/
+```
+
+#### Delete by age
+```bash
+# Delete files older than 30 days
+mc rm --recursive --force --older-than 30d rustfs/my-bucket/builds/
+
+# Delete logs older than 7 days
+mc rm --recursive --force --older-than 7d rustfs/my-bucket/logs/
+```
+
+#### Delete by pattern
+```bash
+# Delete all .log files
+mc rm --recursive --force rustfs/my-bucket/logs/*.log
+
+# Delete a specific dated build
+mc rm --recursive --force rustfs/my-bucket/builds/20241001/
+```
+
+---
+
+### Generating Temporary Download Links
+
+#### Basic usage (default 7-day expiry)
+```bash
+mc share download rustfs/my-bucket/path/file.jar
+```
+
+**Sample output:**
+```
+URL: https://your-server:9000/my-bucket/path/file.jar?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...
+Expire: 7 days 0 hours 0 minutes 0 seconds
+Share: https://your-server:9000/my-bucket/path/file.jar?X-Amz-Algorithm=...
+```
+
+#### Custom expiry
+
+##### By hour
+```bash
+# 1 hour
+mc share download --expire 1h rustfs/my-bucket/file.jar
+
+# 6 hours
+mc share download --expire 6h rustfs/my-bucket/file.jar
+
+# 24 hours (1 day)
+mc share download --expire 24h rustfs/my-bucket/file.jar
+```
+
+##### By day
+```bash
+# 1 day
+mc share download --expire 1d rustfs/my-bucket/file.jar
+
+# 3 days
+mc share download --expire 3d rustfs/my-bucket/file.jar
+
+# 7 days (the maximum)
+mc share download --expire 7d rustfs/my-bucket/file.jar
+
+# 14 days (if the server allows it)
+mc share download --expire 14d rustfs/my-bucket/file.jar
+```
+
+##### By minute (short-term sharing)
+```bash
+# 30 minutes
+mc share download --expire 30m rustfs/my-bucket/temp-file.txt
+
+# 5 minutes (quick share)
+mc share download --expire 5m rustfs/my-bucket/quick-share.pdf
+```
+
+#### Generate links in bulk
+```bash
+# Generate a link for every file in a directory
+mc ls rustfs/my-bucket/releases/ | awk '{print $NF}' | while read file; do
+  echo "Generating link for: $file"
+  mc share download --expire 7d rustfs/my-bucket/releases/$file
+done
+```
+
+#### Generate a link and save it
+```bash
+# Save to a file
+mc share download --expire 7d rustfs/my-bucket/app.jar > download-link.txt
+
+# Extract only the URL
+mc share download --expire 7d rustfs/my-bucket/app.jar | grep "Share:" | awk '{print $2}'
+```
+
+---
+
+### Enabling Public Access
+
+#### Make a bucket publicly readable (permanent links)
+```bash
+# Make the whole bucket public
+mc anonymous set download rustfs/my-bucket/
+
+# Make a specific prefix public
+mc anonymous set download rustfs/my-bucket/public/
+
+# Revoke public access
+mc anonymous set none rustfs/my-bucket/
+```
+
+**URL format with public access enabled:**
+```
+https://your-server:9000/my-bucket/path/file.jar
+```
+
+No signature parameters are required; the URL can be fetched directly.
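+
+Because a public URL is just endpoint + bucket + object key, it can be assembled and probed without mc (a sketch; the host and bucket below are placeholders):
+
+```bash
+# Build the public URL for an object and probe it
+ENDPOINT="https://your-server:9000"
+BUCKET="my-bucket"
+OBJECT="path/file.jar"
+PUBLIC_URL="${ENDPOINT}/${BUCKET}/${OBJECT}"
+echo "$PUBLIC_URL"   # https://your-server:9000/my-bucket/path/file.jar
+# curl -fsI "$PUBLIC_URL"   # a HEAD request should succeed once public
+```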
+
+---
+
+### Cleaning Up Old Files
+
+#### Clean up by age
+```bash
+# Remove builds older than 30 days
+mc rm --recursive --force --older-than 30d rustfs/my-bucket/builds/
+
+# Remove logs older than 7 days
+mc rm --recursive --force --older-than 7d rustfs/my-bucket/logs/
+
+# Remove backups older than 90 days
+mc rm --recursive --force --older-than 90d rustfs/my-bucket/backups/
+```
+
+#### Keep only the most recent N versions
+```bash
+# Sort files by time (newest first), then delete everything after the first 10
+mc ls --recursive rustfs/my-bucket/releases/ | sort -k1,1 -r | tail -n +11 | awk '{print $NF}' | while read file; do
+  mc rm rustfs/my-bucket/releases/$file
+done
+```
+
+---
+
+## Temporary Links in Depth
+
+### How temporary links work
+
+A temporary download link (presigned URL) is generated with the AWS Signature V4 algorithm and embeds:
+- the access credential (signed)
+- an expiry timestamp
+- a signature over the request parameters
+
+**Advantages:**
+- ✅ No authentication required to download
+- ✅ Precise expiry control
+- ✅ Shareable with anyone
+- ✅ Expires automatically, keeping access contained
+
+### Expiry time units
+
+| Unit | Meaning | Example |
+|------|------|------|
+| `m` | minutes | `30m` = 30 minutes |
+| `h` | hours | `6h` = 6 hours |
+| `d` | days | `7d` = 7 days |
+
+### Expiry limits
+
+- **Minimum**: 1 minute (`1m`)
+- **Default**: 7 days (`7d`)
+- **Maximum**: depends on the S3 server configuration (typically 7 days)
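+
+When scripting around these limits, it can help to normalize durations to seconds and enforce the cap up front (a small sketch, assuming the typical 7-day server maximum):
+
+```bash
+# Convert an mc-style duration (30m, 6h, 7d) to seconds and check the 7-day cap
+to_seconds() {
+  n=${1%?}; unit=${1#"${1%?}"}
+  case $unit in
+    m) echo $((n * 60)) ;;
+    h) echo $((n * 3600)) ;;
+    d) echo $((n * 86400)) ;;
+  esac
+}
+MAX=$(to_seconds 7d)   # 604800
+REQ=$(to_seconds 24h)  # 86400
+[ "$REQ" -le "$MAX" ] && echo "24h is within the 7d cap"
+```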
+
+### Common Scenarios
+
+#### Scenario 1: Quick internal sharing (short-term)
+```bash
+# Valid for 5 minutes, for on-the-spot sharing
+mc share download --expire 5m rustfs/bucket/temp-file.zip
+```
+
+#### Scenario 2: Customer downloads (medium-term)
+```bash
+# Valid for 24 hours, giving the customer time to download
+mc share download --expire 24h rustfs/bucket/client-delivery.zip
+```
+
+#### Scenario 3: Archive access (long-term)
+```bash
+# Valid for 7 days, for temporary access to archived files
+mc share download --expire 7d rustfs/bucket/archive/report.pdf
+```
+
+#### Scenario 4: Public release (permanent)
+```bash
+# No expiry: use public access instead
+mc anonymous set download rustfs/bucket/public/
+# Files are then directly accessible: https://server/bucket/public/file.jar
+```
+
+---
+
+## Command Cheat Sheet
+
+### File operations
+
+| Operation | Command |
+|------|------|
+| Upload a file | `mc cp file.txt rustfs/bucket/` |
+| Upload a directory | `mc cp --recursive ./dir/ rustfs/bucket/` |
+| Download a file | `mc cp rustfs/bucket/file.txt ./` |
+| Download a directory | `mc cp --recursive rustfs/bucket/dir/ ./` |
+| Delete a file | `mc rm rustfs/bucket/file.txt` |
+| Delete a directory | `mc rm --recursive --force rustfs/bucket/dir/` |
+| List files | `mc ls rustfs/bucket/` |
+| List recursively | `mc ls --recursive rustfs/bucket/` |
+| Inspect a file | `mc stat rustfs/bucket/file.txt` |
+
+### Temporary links
+
+| Operation | Command |
+|------|------|
+| Generate a link (default 7 days) | `mc share download rustfs/bucket/file.jar` |
+| Valid for 1 hour | `mc share download --expire 1h rustfs/bucket/file.jar` |
+| Valid for 1 day | `mc share download --expire 1d rustfs/bucket/file.jar` |
+| Valid for 7 days | `mc share download --expire 7d rustfs/bucket/file.jar` |
+| Valid for 30 minutes | `mc share download --expire 30m rustfs/bucket/file.jar` |
+
+### Cleanup
+
+| Operation | Command |
+|------|------|
+| Delete files older than 30 days | `mc rm --recursive --older-than 30d rustfs/bucket/` |
+| Delete files older than 7 days | `mc rm --recursive --older-than 7d rustfs/bucket/` |
+| Delete all .log files | `mc rm --recursive rustfs/bucket/*.log` |
+| Force delete | `mc rm --recursive --force rustfs/bucket/dir/` |
+
+### Permissions
+
+| Operation | Command |
+|------|------|
+| Enable public download | `mc anonymous set download rustfs/bucket/` |
+| Enable public upload | `mc anonymous set upload rustfs/bucket/` |
+| Enable public read/write | `mc anonymous set public rustfs/bucket/` |
+| Revoke public access | `mc anonymous set none rustfs/bucket/` |
+| Show permissions | `mc anonymous get rustfs/bucket/` |
+
+---
+
+## Best Practices
+
+### 1. Secret Management
+```yaml
+# ✅ Use Woodpecker secrets
+environment:
+  AWS_ACCESS_KEY_ID:
+    from_secret: AWS_ACCESS_KEY_ID
+  AWS_SECRET_ACCESS_KEY:
+    from_secret: AWS_SECRET_ACCESS_KEY
+
+# ❌ Never hard-code credentials
+environment:
+  AWS_ACCESS_KEY_ID: "AKIAIOSFODNN7EXAMPLE"
+```
+
+### 2. Error Handling
+```bash
+set -e  # exit immediately on error
+
+# Upload, then verify
+mc cp file.jar rustfs/bucket/
+if mc stat rustfs/bucket/file.jar &>/dev/null; then
+  echo "✅ Upload successful"
+else
+  echo "❌ Upload failed"
+  exit 1
+fi
+```
+
+### 3. Organize by Date
+```bash
+# Create a per-date directory
+BUILD_DATE=$(date +%Y%m%d)
+mc cp --recursive dist/ rustfs/bucket/builds/${BUILD_DATE}/
+```
+
+### 4. Clean Up Regularly
+```bash
+# Periodically remove old builds from CI
+mc rm --recursive --older-than 30d rustfs/bucket/builds/
+```
+
+### 5. Generate Meaningful Links
+```bash
+# Save links to a file for easy sharing
+BUILD_ID=${CI_PIPELINE_NUMBER}
+mc share download --expire 7d rustfs/bucket/app-${BUILD_ID}.jar > download-${BUILD_ID}.txt
+```
+
+### 6. Verify Uploads
+```bash
+# Fetch the ETag after upload to verify integrity
+mc cp file.jar rustfs/bucket/
+ETAG=$(mc stat rustfs/bucket/file.jar | grep ETag | awk '{print $2}')
+echo "File uploaded with ETag: $ETAG"
+```
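+
+For single-part uploads the ETag is typically the object's MD5 checksum, so the check above can be tightened by comparing against a local hash (best-effort: multipart uploads use a different ETag scheme):
+
+```bash
+# Compare the local MD5 against the ETag reported by mc stat
+LOCAL_MD5=$(md5sum file.jar | awk '{print $1}')
+REMOTE_ETAG=$(mc stat rustfs/bucket/file.jar | grep ETag | awk '{print $2}')
+if [ "$LOCAL_MD5" = "$REMOTE_ETAG" ]; then
+  echo "✅ Checksum match"
+else
+  echo "⚠️ ETag differs (possibly a multipart upload)"
+fi
+```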
+
+---
+
+## Usage Examples
+
+### Example 1: Upload and Generate a Download Link
+
+```bash
+#!/bin/bash
+set -e
+
+# Configure the alias
+mc alias set rustfs https://s3.example.com:9000 $ACCESS_KEY $SECRET_KEY
+
+# Upload the file
+mc cp myapp.jar rustfs/releases/v1.0.0/myapp.jar
+
+# Generate a download link valid for 24 hours
+DOWNLOAD_LINK=$(mc share download --expire 24h rustfs/releases/v1.0.0/myapp.jar | grep Share | awk '{print $2}')
+
+echo "✅ Upload complete!"
+echo "📥 Download link: $DOWNLOAD_LINK"
+echo "⏰ Valid for: 24 hours"
+```
+
+### Example 2: Clean Up Builds Older Than 30 Days
+
+```bash
+#!/bin/bash
+set -e
+
+mc alias set rustfs https://s3.example.com:9000 $ACCESS_KEY $SECRET_KEY
+
+# Remove builds older than 30 days
+echo "🧹 Cleaning up old builds..."
+mc rm --recursive --older-than 30d rustfs/builds/
+
+# Count the remaining builds
+REMAINING=$(mc ls --recursive rustfs/builds/ | wc -l)
+echo "✅ Cleanup complete, $REMAINING builds remain"
+```
+
+### Example 3: Generate Download Links in Bulk
+
+```bash
+#!/bin/bash
+set -e
+
+mc alias set rustfs https://s3.example.com:9000 $ACCESS_KEY $SECRET_KEY
+
+echo "# Download link summary" > links.md
+echo "Generated: $(date)" >> links.md
+echo "" >> links.md
+
+# Generate a link for every file under releases/
+mc ls rustfs/releases/ | awk '{print $NF}' | while read file; do
+  LINK=$(mc share download --expire 7d rustfs/releases/$file | grep Share | awk '{print $2}')
+  echo "- [$file]($LINK)" >> links.md
+done
+
+echo "✅ Link summary saved to links.md"
+```
+
+---
+
+## References
+
+- [MinIO Client documentation](https://min.io/docs/minio/linux/reference/minio-mc.html)
+- [mc GitHub repository](https://github.com/minio/mc)
+- [RustFS documentation](https://docs.rustfs.com)
+- [AWS S3 API reference](https://docs.aws.amazon.com/s3/)
+
+---
+
+## Troubleshooting
+
+### Issue 1: mc command not found
+```bash
+# Reinstall
+curl -sSL https://dl.min.io/client/mc/release/linux-amd64/mc -o /usr/local/bin/mc
+chmod +x /usr/local/bin/mc
+```
+
+### Issue 2: Connection failure
+```bash
+# Check the alias configuration
+mc alias list
+
+# Reconfigure
+mc alias set rustfs https://your-server:9000 ACCESS_KEY SECRET_KEY
+
+# Test the connection
+mc admin info rustfs
+```
+
+### Issue 3: Permission denied
+```bash
+# Check that the access key is correct
+mc admin user info rustfs ACCESS_KEY
+
+# Verify bucket permissions
+mc anonymous get rustfs/bucket/
+```
+
+### Issue 4: Temporary link not accessible
+```bash
+# The link may have expired; regenerate it
+mc share download --expire 1d rustfs/bucket/file.jar
+
+# Or enable public access instead
+mc anonymous set download rustfs/bucket/
+```
+
+---
+
+**Last updated**: 2025-10-12
+**Maintainer**: RustFS Team
\ No newline at end of file
diff --git a/mc/linux/mc-linux-woodpecker.yml b/mc/linux/mc-linux-woodpecker.yml
new file mode 100644
index 0000000..86af076
--- /dev/null
+++ b/mc/linux/mc-linux-woodpecker.yml
@@ -0,0 +1,478 @@
+# Woodpecker CI - MinIO Client (mc) complete example - Linux
+#
+# This pipeline demonstrates the common scenarios for working with RustFS/S3
+# object storage via mc on Linux: install, upload, download, temporary links,
+# cleanup, and more.
+
+labels:
+ platform: linux/amd64
+
+when:
+ event: [push, manual, tag, pull_request]
+
+steps:
+  # ====================================================
+  # Step 1: install MinIO Client (mc)
+  # ====================================================
+ - name: install-mc
+ image: alpine:latest
+ commands:
+ - |
+ echo "📦 Installing MinIO Client (mc)..."
+
+        # Download mc
+ wget -q https://dl.min.io/client/mc/release/linux-amd64/mc -O /usr/local/bin/mc
+ chmod +x /usr/local/bin/mc
+
+        # Verify the installation
+ mc --version
+ echo "✅ mc installed successfully"
+
+  # ====================================================
+  # Step 2: build the project (simulated)
+  # ====================================================
+ - name: build-project
+ image: alpine:latest
+ commands:
+ - |
+ echo "🔨 Building project..."
+
+        # Create build artifacts
+ mkdir -p dist/assets release
+
+        # Simulate build outputs
+ echo "Application binary $(date)" > dist/app.jar
+ echo "body { color: #333; }" > dist/assets/style.css
+ echo "console.log('app');" > dist/assets/app.js
+ echo "Hello World" > dist/index.html
+ echo "Build log at $(date)" > dist/build.log
+
+        # Simulate a release version
+ echo "Release v1.0.0 $(date)" > release/app-v1.0.0.jar
+
+ echo "✅ Build completed"
+ ls -lh dist/ release/
+
+  # ====================================================
+  # Step 3: upload files to S3 (basic example)
+  # ====================================================
+ - name: upload-basic
+ image: alpine:latest
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "📤 Uploading files to S3..."
+
+        # Configure the mc alias
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+        # Build a date-based path
+ BUILD_DATE=$(date +%Y%m%d)
+ BUILD_PATH="${S3_BUCKET}/builds/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}"
+
+        # Upload a single file
+ echo "📦 Uploading app.jar..."
+ mc cp dist/app.jar rustfs/${BUILD_PATH}/app.jar
+
+        # Verify the upload
+ if mc stat rustfs/${BUILD_PATH}/app.jar &>/dev/null; then
+ ETAG=$(mc stat rustfs/${BUILD_PATH}/app.jar | grep ETag | awk '{print $2}')
+ echo "✅ Upload successful! ETag: ${ETAG}"
+ else
+ echo "❌ Upload failed!"
+ exit 1
+ fi
+
+  # ====================================================
+  # Step 4: upload an entire directory (recursive)
+  # ====================================================
+ - name: upload-directory
+ image: alpine:latest
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "📤 Uploading entire directory..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ BUILD_DATE=$(date +%Y%m%d)
+ BUILD_PATH="${S3_BUCKET}/builds/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}"
+
+        # Recursively upload the entire dist directory
+ echo "📦 Uploading dist/ directory..."
+ mc cp --recursive dist/ rustfs/${BUILD_PATH}/dist/
+
+        # List the uploaded files
+ echo "✅ Uploaded files:"
+ mc ls --recursive rustfs/${BUILD_PATH}/dist/
+
+  # ====================================================
+  # Step 5: generate temporary download links (various durations)
+  # ====================================================
+ - name: generate-download-links
+ image: alpine:latest
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "🔗 Generating temporary download links..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ BUILD_DATE=$(date +%Y%m%d)
+ BUILD_PATH="${S3_BUCKET}/builds/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}"
+
+ echo ""
+ echo "============================================="
+        echo "📥 Temporary download links"
+ echo "============================================="
+ echo ""
+
+        # 1. Short-term link (30 minutes) - for quick sharing
+        echo "🔵 Short-term link (valid for 30 minutes):"
+ mc share download --expire 30m rustfs/${BUILD_PATH}/app.jar
+ echo ""
+
+        # 2. Medium-term link (24 hours) - for everyday sharing
+        echo "🟢 Medium-term link (valid for 24 hours):"
+ mc share download --expire 24h rustfs/${BUILD_PATH}/app.jar
+ echo ""
+
+        # 3. Long-term link (7 days) - for archive access
+        echo "🟡 Long-term link (valid for 7 days):"
+ mc share download --expire 7d rustfs/${BUILD_PATH}/app.jar
+ echo ""
+
+        # 4. Generate links for every file in bulk
+        echo "🔵 Bulk download links for all files (24 hours):"
+ mc ls rustfs/${BUILD_PATH}/dist/ | awk '{print $NF}' | while read file; do
+ if [ ! -z "$file" ]; then
+ echo "📦 $file:"
+ mc share download --expire 24h rustfs/${BUILD_PATH}/dist/$file 2>/dev/null | grep Share || true
+ echo ""
+ fi
+ done
+
+ echo "============================================="
+
+  # ====================================================
+  # Step 6: save the download links to a file
+  # ====================================================
+ - name: save-download-links
+ image: alpine:latest
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "💾 Saving download links to file..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ BUILD_DATE=$(date +%Y%m%d)
+ BUILD_PATH="${S3_BUCKET}/builds/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}"
+
+        # Create the download links file
+        echo "# Build ${CI_PIPELINE_NUMBER} - Download Links ($(date))" > download-links.txt
+        echo "" >> download-links.txt
+
+        echo "## App JAR:" >> download-links.txt
+        mc share download --expire 24h rustfs/${BUILD_PATH}/app.jar >> download-links.txt 2>&1 || true
+        echo "" >> download-links.txt
+
+ echo "## Build Log:" >> download-links.txt
+ mc share download --expire 24h rustfs/${BUILD_PATH}/dist/build.log >> download-links.txt 2>&1 || true
+ echo "" >> download-links.txt
+
+        # Show the contents
+ cat download-links.txt
+
+        # Upload the links file to S3
+ mc cp download-links.txt rustfs/${BUILD_PATH}/DOWNLOAD-LINKS.txt
+ echo "✅ Download links saved to S3"
+
+  # ====================================================
+  # Step 7: list and inspect files
+  # ====================================================
+ - name: list-files
+ image: alpine:latest
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "📋 Listing files in S3..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ BUILD_DATE=$(date +%Y%m%d)
+
+ # List all of today's builds
+ echo "📦 Today's builds:"
+ mc ls rustfs/${S3_BUCKET}/builds/${BUILD_DATE}/
+ echo ""
+
+ # Recursively list every file of the current build
+ echo "📦 Current build files:"
+ mc ls --recursive rustfs/${S3_BUCKET}/builds/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}/
+ echo ""
+
+ # Show detailed file information
+ echo "📊 File details:"
+ mc stat rustfs/${S3_BUCKET}/builds/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}/app.jar
+
+ # ====================================================
+ # Step 8: Download files (verification)
+ # ====================================================
+ - name: download-verify
+ image: alpine:latest
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "📥 Downloading and verifying files..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ BUILD_DATE=$(date +%Y%m%d)
+ BUILD_PATH="${S3_BUCKET}/builds/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}"
+
+ # Download the file
+ mkdir -p download-test
+ mc cp rustfs/${BUILD_PATH}/app.jar download-test/
+
+ # Verify the file
+ if [ -f download-test/app.jar ]; then
+ echo "✅ File downloaded successfully"
+ ls -lh download-test/
+ cat download-test/app.jar
+ else
+ echo "❌ Download failed"
+ exit 1
+ fi
+
+ # ====================================================
+ # Step 9: Clean up old builds (older than 30 days)
+ # ====================================================
+ - name: cleanup-old-builds
+ image: alpine:latest
+ when:
+ branch: main
+ event: push
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "🧹 Cleaning up old builds..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ # Delete builds older than 30 days
+ echo "🗑️ Removing builds older than 30 days..."
+ mc rm --recursive --force --older-than 30d rustfs/${S3_BUCKET}/builds/ || true
+
+ # Show the remaining builds
+ echo "✅ Cleanup completed. Remaining builds:"
+ mc ls --recursive --summarize rustfs/${S3_BUCKET}/builds/ || echo "No builds found"
+
+ # ====================================================
+ # Step 10: Publish a release (tag events only)
+ # ====================================================
+ - name: publish-release
+ image: alpine:latest
+ when:
+ event: tag
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "🚀 Publishing release..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ TAG_NAME=${CI_COMMIT_TAG}
+ RELEASE_PATH="${S3_BUCKET}/releases/${TAG_NAME}"
+
+ # Upload the release artifact
+ mc cp release/app-${TAG_NAME}.jar rustfs/${RELEASE_PATH}/app-${TAG_NAME}.jar
+
+ # Verify the upload (note: this step runs in alpine's /bin/sh, where the
+ # bash/zsh-only "&>" would background the command instead of redirecting)
+ if mc stat rustfs/${RELEASE_PATH}/app-${TAG_NAME}.jar >/dev/null 2>&1; then
+ ETAG=$(mc stat rustfs/${RELEASE_PATH}/app-${TAG_NAME}.jar | grep ETag | awk '{print $2}')
+
+ echo "✅ Release published successfully!"
+ echo "📌 Tag: ${TAG_NAME}"
+ echo "📦 ETag: ${ETAG}"
+ echo "🔗 Path: ${RELEASE_PATH}/app-${TAG_NAME}.jar"
+
+ # Generate a download link (valid 7 days)
+ echo ""
+ echo "📥 Download link (7 days):"
+ mc share download --expire 7d rustfs/${RELEASE_PATH}/app-${TAG_NAME}.jar
+
+ # Optional: enable public (anonymous) access for a permanent link
+ # mc anonymous set download rustfs/${RELEASE_PATH}/
+ # echo "🌐 Public URL: ${S3_ENDPOINT}/${RELEASE_PATH}/app-${TAG_NAME}.jar"
+ else
+ echo "❌ Release failed!"
+ exit 1
+ fi
+
+ # ====================================================
+ # Step 11: Generate a build report
+ # ====================================================
+ - name: build-report
+ image: alpine:latest
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "📊 Generating build report..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ BUILD_DATE=$(date +%Y%m%d)
+ BUILD_PATH="${S3_BUCKET}/builds/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}"
+
+ # Generate the Markdown report
+ cat > BUILD-REPORT.md <<EOF
+ # Build Report - Build ${CI_PIPELINE_NUMBER} (${BUILD_DATE})
+ EOF
+
+ # Append size and a download link for every file
+ mc ls rustfs/${BUILD_PATH}/dist/ | awk '{print $NF}' | while read file; do
+ if [ ! -z "$file" ]; then
+ SIZE=$(mc stat rustfs/${BUILD_PATH}/dist/$file 2>/dev/null | grep Size | awk '{print $2}' || echo "N/A")
+ LINK=$(mc share download --expire 7d rustfs/${BUILD_PATH}/$file 2>/dev/null | grep Share | awk '{print $2}' || echo "N/A")
+ echo "- **$file** (${SIZE})" >> BUILD-REPORT.md
+ echo " - [Download Link]($LINK)" >> BUILD-REPORT.md
+ echo "" >> BUILD-REPORT.md
+ fi
+ done
+
+ # Show the report
+ cat BUILD-REPORT.md
+
+ # Upload the report to S3
+ mc cp BUILD-REPORT.md rustfs/${BUILD_PATH}/
+ echo "✅ Build report uploaded"
+
+# ====================================================
+# Notes
+# ====================================================
+#
+# Usage:
+# 1. Configure the following secrets in the Woodpecker UI:
+#    - AWS_ACCESS_KEY_ID
+#    - AWS_SECRET_ACCESS_KEY
+#    - S3_BUCKET
+#    - S3_ENDPOINT
+#
+# 2. Save this file as .woodpecker.yml
+#
+# 3. Push code to trigger a build, or trigger one manually
+#
+# 4. Inspect each step's output to see the various mc commands in action
+#
+# Temporary link durations:
+# - 30m = 30 minutes (short-term sharing)
+# - 1h  = 1 hour
+# - 6h  = 6 hours
+# - 24h = 24 hours (1 day)
+# - 3d  = 3 days
+# - 7d  = 7 days (default and maximum)
+#
+# Cleanup policy:
+# - --older-than 30d  removes files older than 30 days
+# - --older-than 7d   removes files older than 7 days
+# - --older-than 90d  removes files older than 90 days
+#
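+# A quick sketch combining the two (the alias, bucket, and paths below are
+# placeholders, not values from this pipeline):
+#
+#   mc share download --expire 2h rustfs/my-bucket/builds/app.jar   # one-off 2-hour link
+#   mc find rustfs/my-bucket/builds/ --older-than 30d               # preview what cleanup would delete
+#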
+# ====================================================
\ No newline at end of file
diff --git a/mc/macos/mc-macos-woodpecker.yml b/mc/macos/mc-macos-woodpecker.yml
new file mode 100644
index 0000000..c920956
--- /dev/null
+++ b/mc/macos/mc-macos-woodpecker.yml
@@ -0,0 +1,604 @@
+# Woodpecker CI - complete MinIO Client (mc) example - macOS environment
+#
+# This configuration demonstrates all common scenarios for working with
+# RustFS/S3 object storage via mc on macOS, including installation, upload,
+# download, temporary share links, and cleanup.
+#
+# Intended for macOS local/exec agents (e.g. a Mac mini)
+
+labels:
+ host: Mac-mini.local
+ platform: darwin/arm64
+ # For Intel Macs, use darwin/amd64
+
+when:
+ event: [push, manual, tag, pull_request]
+
+steps:
+ # ====================================================
+ # Step 1: Install the MinIO Client (mc)
+ # ====================================================
+ - name: install-mc
+ image: /bin/zsh
+ commands:
+ - |
+ echo "📦 Installing MinIO Client (mc)..."
+
+ # Skip if already installed
+ if command -v mc &> /dev/null; then
+ echo "✅ mc already installed: $(mc --version)"
+ else
+ # Pick the download URL for this architecture
+ ARCH=$(uname -m)
+ if [[ "$ARCH" == "arm64" ]]; then
+ MC_URL="https://dl.min.io/client/mc/release/darwin-arm64/mc"
+ else
+ MC_URL="https://dl.min.io/client/mc/release/darwin-amd64/mc"
+ fi
+
+ echo "📥 Downloading mc for ${ARCH}..."
+ curl -sSL ${MC_URL} -o /usr/local/bin/mc
+ chmod +x /usr/local/bin/mc
+
+ # Verify the installation
+ mc --version
+ echo "✅ mc installed successfully"
+ fi
+
+ # ====================================================
+ # Step 2: Build the project (simulated)
+ # ====================================================
+ - name: build-project
+ image: /bin/zsh
+ commands:
+ - |
+ echo "🔨 Building project on macOS..."
+
+ # Create the artifact directories
+ mkdir -p dist/assets release
+
+ # Simulated build outputs
+ echo "Application binary built on macOS at $(date)" > dist/app.jar
+ echo "body { color: #333; font-family: -apple-system; }" > dist/assets/style.css
+ echo "console.log('macOS build');" > dist/assets/app.js
+ echo "Built on macOS" > dist/index.html
+ echo "Build log on macOS at $(date)" > dist/build.log
+
+ # Simulated release artifact
+ echo "Release v1.0.0 built on macOS at $(date)" > release/app-v1.0.0.jar
+
+ echo "✅ Build completed"
+ ls -lh dist/ release/
+
+ # ====================================================
+ # Step 3: Upload files to S3 (basic example)
+ # ====================================================
+ - name: upload-basic
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "📤 Uploading files to S3..."
+
+ # Configure the mc alias
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ # Build a date-based destination path
+ BUILD_DATE=$(date +%Y%m%d)
+ BUILD_PATH="${S3_BUCKET}/builds/macos/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}"
+
+ # Upload a single file
+ echo "📦 Uploading app.jar..."
+ mc cp dist/app.jar rustfs/${BUILD_PATH}/app.jar
+
+ # Verify the upload
+ if mc stat rustfs/${BUILD_PATH}/app.jar &>/dev/null; then
+ ETAG=$(mc stat rustfs/${BUILD_PATH}/app.jar | grep ETag | awk '{print $2}')
+ echo "✅ Upload successful! ETag: ${ETAG}"
+ else
+ echo "❌ Upload failed!"
+ exit 1
+ fi
+
+ # ====================================================
+ # Step 4: Upload an entire directory (recursive)
+ # ====================================================
+ - name: upload-directory
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "📤 Uploading entire directory..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ BUILD_DATE=$(date +%Y%m%d)
+ BUILD_PATH="${S3_BUCKET}/builds/macos/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}"
+
+ # Recursively upload the whole dist directory
+ echo "📦 Uploading dist/ directory..."
+ mc cp --recursive dist/ rustfs/${BUILD_PATH}/dist/
+
+ # List the uploaded files
+ echo "✅ Uploaded files:"
+ mc ls --recursive rustfs/${BUILD_PATH}/dist/
+
+ # ====================================================
+ # Step 5: Generate temporary download links (various durations)
+ # ====================================================
+ - name: generate-download-links
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "🔗 Generating temporary download links..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ BUILD_DATE=$(date +%Y%m%d)
+ BUILD_PATH="${S3_BUCKET}/builds/macos/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}"
+
+ echo ""
+ echo "============================================="
+ echo "📥 Temporary download links (macOS build)"
+ echo "============================================="
+ echo ""
+
+ # 1. Short-term link (30 minutes) - quick sharing
+ echo "🔵 Short-term link (valid 30 minutes):"
+ mc share download --expire 30m rustfs/${BUILD_PATH}/app.jar
+ echo ""
+
+ # 2. Medium-term link (24 hours) - day-to-day sharing
+ echo "🟢 Medium-term link (valid 24 hours):"
+ mc share download --expire 24h rustfs/${BUILD_PATH}/app.jar
+ echo ""
+
+ # 3. Long-term link (7 days) - archival access
+ echo "🟡 Long-term link (valid 7 days):"
+ mc share download --expire 7d rustfs/${BUILD_PATH}/app.jar
+ echo ""
+
+ # 4. Other duration examples
+ echo "🔷 Other durations:"
+ echo ""
+ echo "Valid 1 hour:"
+ mc share download --expire 1h rustfs/${BUILD_PATH}/app.jar
+ echo ""
+ echo "Valid 6 hours:"
+ mc share download --expire 6h rustfs/${BUILD_PATH}/app.jar
+ echo ""
+ echo "Valid 3 days:"
+ mc share download --expire 3d rustfs/${BUILD_PATH}/app.jar
+ echo ""
+
+ # 5. Generate links for every file in bulk
+ echo "🔵 Bulk download links for all files (valid 24 hours):"
+ mc ls rustfs/${BUILD_PATH}/dist/ | awk '{print $NF}' | while read file; do
+ if [[ -n "$file" ]]; then
+ echo "📦 $file:"
+ mc share download --expire 24h rustfs/${BUILD_PATH}/dist/$file 2>/dev/null | grep Share || true
+ echo ""
+ fi
+ done
+
+ echo "============================================="
+
+ # ====================================================
+ # Step 6: Save download links to a file (and optionally open it)
+ # ====================================================
+ - name: save-download-links
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "💾 Saving download links to file..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ BUILD_DATE=$(date +%Y%m%d)
+ BUILD_PATH="${S3_BUCKET}/builds/macos/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}"
+
+ # Create the download-links file
+ cat > download-links.txt <<EOF
+ # Download Links - macOS Build ${CI_PIPELINE_NUMBER} (${BUILD_DATE})
+ EOF
+
+ echo "## Application:" >> download-links.txt
+ mc share download --expire 24h rustfs/${BUILD_PATH}/app.jar >> download-links.txt 2>&1 || true
+ echo "" >> download-links.txt
+
+ echo "## Build Log:" >> download-links.txt
+ mc share download --expire 24h rustfs/${BUILD_PATH}/dist/build.log >> download-links.txt 2>&1 || true
+ echo "" >> download-links.txt
+
+ # Show the contents
+ cat download-links.txt
+
+ # Upload the links file to S3
+ mc cp download-links.txt rustfs/${BUILD_PATH}/DOWNLOAD-LINKS.txt
+ echo "✅ Download links saved to S3"
+
+ # macOS-specific: the open command can open the file locally
+ # open download-links.txt
+
+ # ====================================================
+ # Step 7: List and inspect files
+ # ====================================================
+ - name: list-files
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "📋 Listing files in S3..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ BUILD_DATE=$(date +%Y%m%d)
+
+ # List all of today's macOS builds
+ echo "📦 Today's macOS builds:"
+ mc ls rustfs/${S3_BUCKET}/builds/macos/${BUILD_DATE}/
+ echo ""
+
+ # Recursively list every file of the current build
+ echo "📦 Current build files:"
+ mc ls --recursive rustfs/${S3_BUCKET}/builds/macos/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}/
+ echo ""
+
+ # Show detailed file information
+ echo "📊 File details:"
+ mc stat rustfs/${S3_BUCKET}/builds/macos/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}/app.jar
+
+ # ====================================================
+ # Step 8: Download files (verification)
+ # ====================================================
+ - name: download-verify
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "📥 Downloading and verifying files..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ BUILD_DATE=$(date +%Y%m%d)
+ BUILD_PATH="${S3_BUCKET}/builds/macos/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}"
+
+ # Download the file
+ mkdir -p download-test
+ mc cp rustfs/${BUILD_PATH}/app.jar download-test/
+
+ # Verify the file
+ if [[ -f download-test/app.jar ]]; then
+ echo "✅ File downloaded successfully"
+ ls -lh download-test/
+ cat download-test/app.jar
+ else
+ echo "❌ Download failed"
+ exit 1
+ fi
+
+ # ====================================================
+ # Step 9: Clean up old builds (older than 30 days)
+ # ====================================================
+ - name: cleanup-old-builds
+ image: /bin/zsh
+ when:
+ branch: main
+ event: push
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "🧹 Cleaning up old macOS builds..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ # Delete macOS builds older than 30 days
+ echo "🗑️ Removing macOS builds older than 30 days..."
+ mc rm --recursive --force --older-than 30d rustfs/${S3_BUCKET}/builds/macos/ || true
+
+ # Show the remaining builds
+ echo "✅ Cleanup completed. Remaining macOS builds:"
+ mc ls --recursive --summarize rustfs/${S3_BUCKET}/builds/macos/ || echo "No builds found"
+
+ # ====================================================
+ # Step 10: Publish a release (tag events only)
+ # ====================================================
+ - name: publish-release
+ image: /bin/zsh
+ when:
+ event: tag
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "🚀 Publishing macOS release..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ TAG_NAME=${CI_COMMIT_TAG}
+ RELEASE_PATH="${S3_BUCKET}/releases/macos/${TAG_NAME}"
+
+ # Upload the release artifact
+ mc cp release/app-${TAG_NAME}.jar rustfs/${RELEASE_PATH}/app-macos-${TAG_NAME}.jar
+
+ # Verify the upload
+ if mc stat rustfs/${RELEASE_PATH}/app-macos-${TAG_NAME}.jar &>/dev/null; then
+ ETAG=$(mc stat rustfs/${RELEASE_PATH}/app-macos-${TAG_NAME}.jar | grep ETag | awk '{print $2}')
+
+ echo "✅ macOS Release published successfully!"
+ echo "📌 Tag: ${TAG_NAME}"
+ echo "📦 ETag: ${ETAG}"
+ echo "🔗 Path: ${RELEASE_PATH}/app-macos-${TAG_NAME}.jar"
+ echo "🍎 Platform: macOS $(sw_vers -productVersion)"
+
+ # Generate download links with several durations
+ echo ""
+ echo "📥 Download links:"
+ echo ""
+ echo "Valid 1 hour (temporary sharing):"
+ mc share download --expire 1h rustfs/${RELEASE_PATH}/app-macos-${TAG_NAME}.jar
+ echo ""
+ echo "Valid 7 days (official release):"
+ mc share download --expire 7d rustfs/${RELEASE_PATH}/app-macos-${TAG_NAME}.jar
+
+ # Optional: enable public (anonymous) access for a permanent link
+ # mc anonymous set download rustfs/${RELEASE_PATH}/
+ # echo ""
+ # echo "🌐 Public URL: ${S3_ENDPOINT}/${RELEASE_PATH}/app-macos-${TAG_NAME}.jar"
+ else
+ echo "❌ Release failed!"
+ exit 1
+ fi
+
+ # ====================================================
+ # Step 11: Generate a build report (macOS flavored)
+ # ====================================================
+ - name: build-report
+ image: /bin/zsh
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "📊 Generating macOS build report..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ BUILD_DATE=$(date +%Y%m%d)
+ BUILD_PATH="${S3_BUCKET}/builds/macos/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}"
+
+ # Gather system information
+ MACOS_VERSION=$(sw_vers -productVersion)
+ ARCH=$(uname -m)
+
+ # Generate the Markdown report
+ cat > BUILD-REPORT.md <<EOF
+ # macOS Build Report - Build ${CI_PIPELINE_NUMBER} (${BUILD_DATE})
+ EOF
+
+ # Append size and download links for every file
+ mc ls rustfs/${BUILD_PATH}/dist/ | awk '{print $NF}' | while read file; do
+ if [[ -n "$file" ]]; then
+ SIZE=$(mc stat rustfs/${BUILD_PATH}/dist/$file 2>/dev/null | grep Size | awk '{print $2}' || echo "N/A")
+
+ # Generate links with several durations
+ LINK_1H=$(mc share download --expire 1h rustfs/${BUILD_PATH}/$file 2>/dev/null | grep Share | awk '{print $2}' || echo "N/A")
+ LINK_24H=$(mc share download --expire 24h rustfs/${BUILD_PATH}/$file 2>/dev/null | grep Share | awk '{print $2}' || echo "N/A")
+ LINK_7D=$(mc share download --expire 7d rustfs/${BUILD_PATH}/$file 2>/dev/null | grep Share | awk '{print $2}' || echo "N/A")
+
+ echo "### $file" >> BUILD-REPORT.md
+ echo "" >> BUILD-REPORT.md
+ echo "- **Size**: ${SIZE}" >> BUILD-REPORT.md
+ echo "- **Download Links**:" >> BUILD-REPORT.md
+ echo " - [Valid 1 hour]($LINK_1H)" >> BUILD-REPORT.md
+ echo " - [Valid 24 hours]($LINK_24H)" >> BUILD-REPORT.md
+ echo " - [Valid 7 days]($LINK_7D)" >> BUILD-REPORT.md
+ echo "" >> BUILD-REPORT.md
+ fi
+ done
+
+ # Append system information
+ cat >> BUILD-REPORT.md <<EOF
+
+ ## System Info
+ - macOS: ${MACOS_VERSION}
+ - Architecture: ${ARCH}
+ - Xcode: $(xcodebuild -version 2>/dev/null | head -n1 || echo "N/A")
+
+ EOF
+
+ # Show the report
+ cat BUILD-REPORT.md
+
+ # Upload the report to S3
+ mc cp BUILD-REPORT.md rustfs/${BUILD_PATH}/
+ echo "✅ Build report uploaded"
+
+ # macOS-specific: preview in the default app with the open command
+ # open BUILD-REPORT.md
+
+ # ====================================================
+ # Step 12: Finder quick actions (macOS-specific)
+ # ====================================================
+ - name: macos-integration
+ image: /bin/zsh
+ when:
+ event: manual
+ environment:
+ AWS_ACCESS_KEY_ID:
+ from_secret: AWS_ACCESS_KEY_ID
+ AWS_SECRET_ACCESS_KEY:
+ from_secret: AWS_SECRET_ACCESS_KEY
+ S3_BUCKET:
+ from_secret: S3_BUCKET
+ S3_ENDPOINT:
+ from_secret: S3_ENDPOINT
+ commands:
+ - |
+ set -e
+ echo "🍎 macOS Integration features..."
+
+ mc alias set rustfs ${S3_ENDPOINT} ${AWS_ACCESS_KEY_ID} ${AWS_SECRET_ACCESS_KEY}
+
+ BUILD_DATE=$(date +%Y%m%d)
+ BUILD_PATH="${S3_BUCKET}/builds/macos/${BUILD_DATE}/build-${CI_PIPELINE_NUMBER}"
+
+ # Create a desktop shortcut script (optional)
+ cat > ~/Desktop/download-build.sh <