
Large File Uploads with Axios

Large file uploads have three core pain points: an interrupted upload has to start over from scratch, a single oversized request is prone to timeouts and failures, and the server is put under heavy receiving pressure. The mainstream solution is therefore *chunked upload + resumable upload + instant upload*, which, implemented with Axios, balances stability and user experience.

I. Core Approach

  1. Chunked (sliced) upload: split the large file into smaller blocks of a fixed size (e.g. 5 MB) and upload them in batches;
  2. Resumable upload: record which chunks have been uploaded, and after an interruption upload only the unfinished ones;
  3. Instant upload: compute the file's MD5 before uploading; if the server already has the file, return success immediately;
  4. Merge chunks: once all chunks are uploaded, ask the server to merge them into the complete file.
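The slicing in step 1 boils down to simple byte-range arithmetic. A minimal sketch (the function name `getChunkRanges` is illustrative, not part of Axios or any library; the 5 MB default matches the `chunkSize` used in the code later):

```javascript
// Sketch: compute the [start, end) byte ranges for each chunk.
// getChunkRanges is an illustrative helper; the 5 MB default matches
// the chunkSize configured in the implementation below.
function getChunkRanges(fileSize, chunkSize = 5 * 1024 * 1024) {
    const ranges = [];
    for (let start = 0; start < fileSize; start += chunkSize) {
        ranges.push({ start, end: Math.min(start + chunkSize, fileSize) });
    }
    return ranges;
}
```

Each range is later fed to `file.slice(start, end)`; a 12 MB file yields three chunks of 5 MB, 5 MB, and 2 MB.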

II. Prerequisites

Install the helper libraries for file slicing and MD5 computation:

bash
# compute the file MD5 (spark-md5 is recommended for its efficiency)
npm install spark-md5
# core HTTP client
npm install axios

III. Full Implementation

1. Frontend core code (works with Vue/React/vanilla JS)

javascript
import axios from 'axios';
import SparkMD5 from 'spark-md5';

// Configuration
const UPLOAD_CONFIG = {
    chunkSize: 5 * 1024 * 1024, // chunk size: 5 MB
    baseURL: 'http://localhost:3000', // server address
    timeout: 60000, // per-chunk upload timeout
};

// Create an Axios instance.
// Note: do NOT set a global 'Content-Type: multipart/form-data' header here.
// Axios sets it (with the required boundary) automatically when the request
// body is a FormData object, and a global multipart header would break the
// JSON requests (/upload/check, /upload/merge) made through this instance.
const uploadAxios = axios.create({
    baseURL: UPLOAD_CONFIG.baseURL,
    timeout: UPLOAD_CONFIG.timeout,
});

/**
 * Compute the file's MD5 (used for instant upload and chunk identification)
 * @param {File} file file to upload
 * @returns {Promise<string>} the file's MD5
 */
const calculateFileMD5 = (file) => {
    return new Promise((resolve, reject) => {
        const fileReader = new FileReader();
        const spark = new SparkMD5.ArrayBuffer();
        const chunkSize = 2 * 1024 * 1024; // read block size (customizable)
        const chunks = Math.ceil(file.size / chunkSize);
        let currentChunk = 0;

        fileReader.onload = (e) => {
            spark.append(e.target.result);
            currentChunk++;

            if (currentChunk < chunks) {
                loadNextChunk();
            } else {
                resolve(spark.end()); // return the final MD5
            }
        };

        fileReader.onerror = () => {
            // the onerror handler receives an event, not an Error;
            // the actual failure reason lives on fileReader.error
            reject(fileReader.error || new Error('MD5 computation failed'));
        };

        const loadNextChunk = () => {
            const start = currentChunk * chunkSize;
            const end = Math.min(start + chunkSize, file.size);
            fileReader.readAsArrayBuffer(file.slice(start, end));
        };

        loadNextChunk();
    });
};

/**
 * Check whether the file was already uploaded (instant/resumable upload)
 * @param {string} fileMd5 file MD5
 * @param {string} fileName file name
 * @returns {Promise<{ uploaded: boolean; uploadedChunks: number[] }>}
 */
const checkFileUploadStatus = async (fileMd5, fileName) => {
    try {
        const res = await uploadAxios.get('/upload/check', {
            params: {fileMd5, fileName},
        });
        return res.data;
    } catch (err) {
        console.error('Failed to check upload status:', err);
        return {uploaded: false, uploadedChunks: []};
    }
};

/**
 * Upload a single chunk
 * @param {Blob} chunk chunk data
 * @param {string} fileMd5 file MD5
 * @param {number} chunkIndex chunk index
 * @param {number} totalChunks total number of chunks
 * @returns {Promise<boolean>}
 */
const uploadChunk = async (chunk, fileMd5, chunkIndex, totalChunks) => {
    const formData = new FormData();
    // Append the text fields BEFORE the file: multer parses the multipart
    // body in order, so req.body is only populated with fields that appear
    // before the file when the diskStorage callbacks run on the server.
    formData.append('fileMd5', fileMd5);
    formData.append('chunkIndex', chunkIndex);
    formData.append('totalChunks', totalChunks);
    formData.append('chunk', chunk);

    try {
        await uploadAxios.post('/upload/chunk', formData, {
            // optional: per-chunk upload progress callback
            onUploadProgress: (progressEvent) => {
                if (progressEvent.total) { // total can be undefined
                    const percent = Math.round((progressEvent.loaded / progressEvent.total) * 100);
                    console.log(`chunk ${chunkIndex} progress: ${percent}%`);
                }
            },
        });
        return true;
    } catch (err) {
        console.error(`chunk ${chunkIndex} upload failed:`, err);
        return false;
    }
};

/**
 * Ask the server to merge the chunks
 * @param {string} fileMd5 file MD5
 * @param {string} fileName file name
 * @param {number} fileSize file size
 * @returns {Promise<boolean>}
 */
const mergeChunks = async (fileMd5, fileName, fileSize) => {
    try {
        await uploadAxios.post('/upload/merge', {
            fileMd5,
            fileName,
            fileSize,
        });
        return true;
    } catch (err) {
        console.error('Chunk merge failed:', err);
        return false;
    }
};

/**
 * Large file upload entry point
 * @param {File} file file to upload
 * @returns {Promise<boolean>}
 */
export const uploadLargeFile = async (file) => {
    if (!file) {
        throw new Error('Please select a file to upload');
    }

    try {
        // 1. Compute the file MD5
        console.log('Computing file MD5...');
        const fileMd5 = await calculateFileMD5(file);
        console.log('File MD5 computed:', fileMd5);

        // 2. Check whether the file was already uploaded (instant/resumable upload)
        const {uploaded, uploadedChunks} = await checkFileUploadStatus(fileMd5, file.name);
        if (uploaded) {
            console.log('File already exists, instant upload succeeded!');
            return true;
        }

        // 3. Split the file into chunks
        const totalChunks = Math.ceil(file.size / UPLOAD_CONFIG.chunkSize);
        console.log(`File split into ${totalChunks} chunks, uploading the unfinished ones...`);

        // 4. Upload the unfinished chunks in batch
        const uploadPromises = [];
        for (let i = 0; i < totalChunks; i++) {
            // skip chunks already on the server (resumable upload)
            if (uploadedChunks.includes(i)) {
                console.log(`chunk ${i} already uploaded, skipping`);
                continue;
            }

            // slice the chunk
            const start = i * UPLOAD_CONFIG.chunkSize;
            const end = Math.min(start + UPLOAD_CONFIG.chunkSize, file.size);
            const chunk = file.slice(start, end);

            // upload the chunk (can be switched to serial or limited
            // concurrency to avoid firing too many requests at once)
            uploadPromises.push(uploadChunk(chunk, fileMd5, i, totalChunks));

            // optional: cap concurrency (e.g. at most 3 chunks in flight)
            // if (uploadPromises.length >= 3) {
            //   await Promise.all(uploadPromises);
            //   uploadPromises.length = 0;
            // }
        }

        // wait for all chunk uploads to finish
        const uploadResults = await Promise.all(uploadPromises);
        if (uploadResults.some((res) => !res)) {
            throw new Error('Some chunks failed to upload, please retry');
        }

        // 5. Merge the chunks
        console.log('All chunks uploaded, merging...');
        const mergeResult = await mergeChunks(fileMd5, file.name, file.size);
        if (mergeResult) {
            console.log('File uploaded successfully!');
            return true;
        } else {
            throw new Error('Chunk merge failed');
        }
    } catch (err) {
        console.error('Large file upload failed:', err);
        return false;
    }
};

2. Server-side example (Node.js + Express)

Install the dependencies: npm install express cors multer fs-extra

javascript
const express = require('express');
const cors = require('cors');
const multer = require('multer');
const fs = require('fs-extra');
const path = require('path');

const app = express();
app.use(cors());
app.use(express.json());

// Storage directories
const UPLOAD_DIR = path.resolve(__dirname, './uploads');
const TEMP_DIR = path.resolve(__dirname, './temp');
fs.ensureDirSync(UPLOAD_DIR);
fs.ensureDirSync(TEMP_DIR);

// Configure multer to receive chunks.
// Note: req.body only contains the fileMd5/chunkIndex fields here if the
// client appended them to the FormData before the file field.
const upload = multer({
    storage: multer.diskStorage({
        destination: (req, file, cb) => {
            const {fileMd5} = req.body;
            const chunkDir = path.join(TEMP_DIR, fileMd5);
            fs.ensureDirSync(chunkDir);
            cb(null, chunkDir);
        },
        filename: (req, file, cb) => {
            const {chunkIndex} = req.body;
            cb(null, chunkIndex); // chunk file name = chunk index
        },
    }),
});

// 1. Check upload status (instant/resumable upload)
app.get('/upload/check', async (req, res) => {
    try {
        const {fileMd5, fileName} = req.query;
        const targetFilePath = path.join(UPLOAD_DIR, fileName);

        // Check whether the file already exists (instant upload).
        // Note: keying by fileName alone means two different files with the
        // same name collide; a production system would key by fileMd5.
        if (await fs.pathExists(targetFilePath)) {
            return res.json({uploaded: true, uploadedChunks: []});
        }

        // Check the temp directory for already-uploaded chunks (resumable upload)
        const chunkDir = path.join(TEMP_DIR, fileMd5);
        let uploadedChunks = [];
        if (await fs.pathExists(chunkDir)) {
            uploadedChunks = (await fs.readdir(chunkDir)).map(Number);
        }

        res.json({uploaded: false, uploadedChunks});
    } catch (err) {
        res.status(500).json({error: err.message});
    }
});

// 2. Receive a chunk
app.post('/upload/chunk', upload.single('chunk'), (req, res) => {
    res.json({success: true, message: 'Chunk uploaded'});
});

// 3. Merge chunks
app.post('/upload/merge', async (req, res) => {
    try {
        const {fileMd5, fileName, fileSize} = req.body;
        const chunkDir = path.join(TEMP_DIR, fileMd5);
        const targetFilePath = path.join(UPLOAD_DIR, fileName);

        // read all chunk files and sort them by index
        const chunkFiles = await fs.readdir(chunkDir);
        chunkFiles.sort((a, b) => Number(a) - Number(b));

        // merge the chunks through a single write stream
        const writeStream = fs.createWriteStream(targetFilePath);
        for (const chunkFile of chunkFiles) {
            const chunkPath = path.join(chunkDir, chunkFile);
            const readStream = fs.createReadStream(chunkPath);
            await new Promise((resolve, reject) => {
                readStream.pipe(writeStream, {end: false});
                readStream.on('end', resolve);
                readStream.on('error', reject);
            });
            await fs.unlink(chunkPath); // delete the merged chunk file
        }
        writeStream.end();
        await fs.remove(chunkDir); // delete the (now empty) chunk directory
        // fileSize could additionally be compared against the merged
        // file's size as an integrity check

        res.json({success: true, message: 'File merged', filePath: targetFilePath});
    } catch (err) {
        res.status(500).json({error: err.message});
    }
});

// start the server
const PORT = 3000;
app.listen(PORT, () => {
    console.log(`Server running at http://localhost:${PORT}`);
});

IV. Key Optimizations

1. Concurrency control

The frontend above uploads all chunks concurrently by default. With many chunks (100+), this can exceed the browser's concurrent request limit, so the concurrency should be capped:

javascript
// Optimized chunk upload (maximum concurrency capped at 3)
const uploadChunksWithLimit = async (file, fileMd5, totalChunks, uploadedChunks) => {
    const maxConcurrent = 3; // maximum number of concurrent uploads
    let currentIndex = 0;
    let activePromises = [];

    while (currentIndex < totalChunks) {
        // skip already-uploaded chunks
        if (uploadedChunks.includes(currentIndex)) {
            currentIndex++;
            continue;
        }

        // slice the chunk
        const start = currentIndex * UPLOAD_CONFIG.chunkSize;
        const end = Math.min(start + UPLOAD_CONFIG.chunkSize, file.size);
        const chunk = file.slice(start, end);

        // upload the chunk and track it in the active list
        const promise = uploadChunk(chunk, fileMd5, currentIndex, totalChunks)
            .finally(() => {
                // remove from the active list when settled
                activePromises = activePromises.filter(p => p !== promise);
            });
        activePromises.push(promise);

        // when the cap is reached, wait for one upload to finish
        if (activePromises.length >= maxConcurrent) {
            await Promise.race(activePromises);
        }

        currentIndex++;
    }

    // wait for the remaining chunks to finish
    await Promise.all(activePromises);
};

2. Retry mechanism

Add retry logic for failed chunks:

javascript
const uploadChunkWithRetry = async (chunk, fileMd5, chunkIndex, totalChunks, retryCount = 3) => {
    let attempt = 0;
    while (attempt < retryCount) {
        try {
            return await uploadChunk(chunk, fileMd5, chunkIndex, totalChunks);
        } catch (err) {
            attempt++;
            if (attempt >= retryCount) throw err;
            console.log(`分片${chunkIndex}上传失败,重试第${attempt}次...`);
            await new Promise(resolve => setTimeout(resolve, 1000 * attempt)); // 指数退避
        }
    }
};

3. Pause/cancel

Use AbortController to pause or cancel an upload:

javascript
// 创建AbortController实例
const abortController = new AbortController();

// 上传分片时传入signal
const uploadChunk = async (chunk, fileMd5, chunkIndex, totalChunks) => {
    const formData = new FormData();
    // ... 省略formData构造

    try {
        await uploadAxios.post('/upload/chunk', formData, {
            onUploadProgress: progressEvent => { /* 进度回调 */
            },
            signal: abortController.signal, // 绑定取消信号
        });
        return true;
    } catch (err) {
        if (err.name === 'AbortError') {
            console.log(`分片${chunkIndex}上传已取消`);
            return false;
        }
        throw err;
    }
};

// 取消上传
const cancelUpload = () => {
    abortController.abort();
};

V. Caveats

  1. Server-side storage: clean up the temp chunk directories periodically (e.g. a scheduled job that deletes unmerged chunks older than 24 hours);
  2. MD5 performance: computing a large file's MD5 can be slow; run it in a Web Worker to avoid blocking the main thread;
  3. CORS: the server must be configured to allow Content-Type: multipart/form-data and any custom headers;
  4. Body size limits: raise the server's request body limit (Express's JSON parser defaults to 100kb), e.g. via app.use(express.json({ limit: '50mb' })) and multer's limits option;
  5. Persisting resume state: store the uploaded chunk indices in localStorage so the upload can resume even after a page refresh.
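Note 5 can be sketched with two small helpers. The `storage` parameter is injected so the browser's `localStorage` (or any object exposing `getItem`/`setItem`) can be passed in; the helper names and the `upload:` key prefix are assumptions, not part of any library:

```javascript
// Sketch: persist/restore uploaded chunk indices, keyed by file MD5.
// storage is anything with getItem/setItem (e.g. the browser's
// localStorage); the 'upload:' key prefix is an arbitrary choice.
const saveUploadedChunks = (storage, fileMd5, chunkIndices) => {
    storage.setItem('upload:' + fileMd5, JSON.stringify(chunkIndices));
};

const loadUploadedChunks = (storage, fileMd5) => {
    const raw = storage.getItem('upload:' + fileMd5);
    return raw ? JSON.parse(raw) : [];
};
```

In the browser, call `saveUploadedChunks(localStorage, fileMd5, uploadedChunks)` after each successful chunk, and merge `loadUploadedChunks(localStorage, fileMd5)` with the server's `/upload/check` answer on resume.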

VI. Extensions

  • Overall upload progress: aggregate the per-chunk progress into a single total;
  • File type validation: validate the file type on both frontend and server to block malicious uploads;
  • Throttled upload: control the chunk upload rate to avoid saturating bandwidth;
  • Chunk verification: have the server verify each chunk's MD5 on receipt to guarantee chunk integrity.
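The first extension, overall progress, reduces to a pure aggregation over the per-chunk byte counts reported by `onUploadProgress`. A minimal sketch, where the name `overallProgress` and the `chunkLoaded` map (chunk index → bytes uploaded so far) are illustrative:

```javascript
// Sketch: aggregate per-chunk uploaded bytes into one overall percentage.
// chunkLoaded maps chunk index -> bytes uploaded so far, updated from
// each chunk's onUploadProgress callback.
const overallProgress = (chunkLoaded, fileSize) => {
    let loaded = 0;
    for (const bytes of Object.values(chunkLoaded)) loaded += bytes;
    // cap at 100 in case retried chunks over-count
    return Math.min(100, Math.round((loaded / fileSize) * 100));
};
```

Each chunk's callback would do `chunkLoaded[chunkIndex] = progressEvent.loaded` before calling this.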

This is my personal documentation.