Upload Large Files to SharePoint Online

Microsoft has now increased the SharePoint Online per-file upload size limit to 10 GB. (Here’s the full list of SharePoint Online limitations.) This raises a couple of new challenges when the upload has to be carried out through code:

  • Such a large file cannot be kept in a single .NET object; as the short illustration after this list shows, a byte array tops out at roughly 2 GB.
  • Uploading a file of this magnitude in a single HTTP request is impractical: if the connection ever fails, the entire upload has to be restarted. For a large file, that could easily turn into a nightmare.
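
To make the first point concrete, here’s a quick illustration (64-bit .NET assumed): a single byte array cannot come close to holding 10 GB.

// Without gcAllowVeryLargeObjects, no single .NET object may exceed 2 GB;
// even with it, a byte[] is capped at 2,147,483,591 elements (~2 GB).
byte[] wholeFile = new byte[10L * 1024 * 1024 * 1024]; // throws OutOfMemoryException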

Fortunately, Microsoft has introduced three methods for this: StartUpload, ContinueUpload and FinishUpload. Together, they upload a large file in small chunks: the logic is to break the large file into smaller slices and then upload each slice, one at a time. This is great, because it solves both of the problems stated above:

  • Only the current block/chunk of the file needs to be kept in memory at any given time, as opposed to the entire file, so the application won’t be resource-hungry.
  • The upload is now spread across multiple HTTP requests. Say the third chunk fails after two chunks have uploaded: we can resume from exactly the third chunk, with no need to restart the whole upload. Again, this is especially important for large files. The outline right after this list shows the shape of the exchange.
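
In outline, the exchange is three calls tied to one upload ID, with the server returning the committed byte offset after each round trip. A bare-bones sketch (ctx, file and the slice streams are assumed; the full listing below does the real bookkeeping):

Guid uploadId = Guid.NewGuid();

// First slice opens the sliced-upload session on the server.
ClientResult<long> result = file.StartUpload(uploadId, firstSliceStream);
ctx.ExecuteQuery();
long offset = result.Value; // where the next slice must begin

// Middle slices: one HTTP round trip each (repeat as needed).
result = file.ContinueUpload(uploadId, offset, nextSliceStream);
ctx.ExecuteQuery();
offset = result.Value;

// Final slice commits the file and returns the finished File object.
file = file.FinishUpload(uploadId, offset, lastSliceStream);
ctx.ExecuteQuery();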

Here’s the C# code. Treat it as the body of a helper method that returns the uploaded Microsoft.SharePoint.Client.File (hence the return statement near the end). It assumes the CSOM assemblies are referenced, that GetSecurePassword() returns the account password as a SecureString (a sketch of that helper follows the listing), and that the file is larger than a single 8 MB slice. Replace the site URL, credentials and file path with your own.

int blockSize = 8000000; // 8 MB
string fileName = "C:\\Piyush\\9_6GB.odt", uniqueFileName = String.Empty;
long fileSize;
Microsoft.SharePoint.Client.File uploadFile = null;
Guid uploadId = Guid.NewGuid();

using (ClientContext ctx = new ClientContext("siteUrl"))
{
	ctx.Credentials = new SharePointOnlineCredentials("user@company.onmicrosoft.com", GetSecurePassword());
	List docs = ctx.Web.Lists.GetByTitle("Documents");
	ctx.Load(docs.RootFolder, p => p.ServerRelativeUrl);

	// Use large file upload approach
	ClientResult<long> bytesUploaded = null;

	FileStream fs = null;
	try
	{
		fs = System.IO.File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);

		fileSize = fs.Length;
		uniqueFileName = System.IO.Path.GetFileName(fs.Name);

		using (BinaryReader br = new BinaryReader(fs))
		{
			byte[] buffer = new byte[blockSize];
			byte[] lastBuffer = null;
			long fileoffset = 0;
			long totalBytesRead = 0;
			int bytesRead;
			bool first = true;
			bool last = false;

			// Read data from filesystem in blocks
			while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
			{
				totalBytesRead = totalBytesRead + bytesRead;

				// Have we reached the end of the file?
				if (totalBytesRead >= fileSize)
				{
					last = true;
					// Copy the final, partial slice into a buffer of the exact size
					lastBuffer = new byte[bytesRead];
					Array.Copy(buffer, 0, lastBuffer, 0, bytesRead);
				}

				if (first)
				{
					using (MemoryStream contentStream = new MemoryStream())
					{
						// Add an empty file.
						FileCreationInformation fileInfo = new FileCreationInformation();
						fileInfo.ContentStream = contentStream;
						fileInfo.Url = uniqueFileName;
						fileInfo.Overwrite = true;
						uploadFile = docs.RootFolder.Files.Add(fileInfo);

						// Start upload by uploading the first slice.
						using (MemoryStream s = new MemoryStream(buffer))
						{
							// Call the start upload method on the first slice
							bytesUploaded = uploadFile.StartUpload(uploadId, s);
							ctx.ExecuteQuery();
							// fileoffset is the pointer where the next slice will be added
							fileoffset = bytesUploaded.Value;
						}

						// we can only start the upload once
						first = false;
					}
				}
				else
				{
					// Get a reference to our file
					uploadFile = ctx.Web.GetFileByServerRelativeUrl(docs.RootFolder.ServerRelativeUrl + System.IO.Path.AltDirectorySeparatorChar + uniqueFileName);

					if (last)
					{
						// Is this the last slice of data?
						using (MemoryStream s = new MemoryStream(lastBuffer))
						{
							// End sliced upload by calling FinishUpload
							uploadFile = uploadFile.FinishUpload(uploadId, fileoffset, s);
							ctx.ExecuteQuery();

							// return the file object for the uploaded file
							return uploadFile;
						}
					}
					else
					{
						using (MemoryStream s = new MemoryStream(buffer))
						{
							// Continue sliced upload
							bytesUploaded = uploadFile.ContinueUpload(uploadId, fileoffset, s);
							ctx.ExecuteQuery();
							// update fileoffset for the next slice
							fileoffset = bytesUploaded.Value;
						}
					}
				}

			}
		}
	}
	finally
	{
		if (fs != null)
		{
			fs.Dispose();
		}
	}
}	
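
The listing above calls a GetSecurePassword() helper that isn’t shown. Here’s a minimal sketch of what it might look like; the hard-coded literal is a placeholder only, and real code should pull the password from a prompt or a secure store:

using System.Security;

private static SecureString GetSecurePassword()
{
	SecureString secure = new SecureString();
	foreach (char c in "password-goes-here") // placeholder, not for real use
	{
		secure.AppendChar(c);
	}
	secure.MakeReadOnly();
	return secure;
}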

Key Takeaways

  • SharePoint Online now accepts files of up to 10 GB, but uploading them through code calls for the sliced-upload methods: StartUpload, ContinueUpload and FinishUpload.
  • Memory usage stays flat, because only the current slice (8 MB here) is held in memory at any time, not the whole file.
  • Each slice travels in its own HTTP request, tied to a single upload ID, so a failed upload can be resumed from the last committed offset instead of being restarted from scratch.
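
Since ContinueUpload and FinishUpload take the upload ID and the byte offset explicitly, resuming after a failure only requires persisting those two values after each successful slice and seeking the local file to the saved offset on restart. A rough sketch, where LoadCheckpoint and SaveCheckpoint are hypothetical stand-ins for whatever persistence you use:

// Resume a sliced upload after a failure (sketch; uploadFile, ctx,
// blockSize and fileName as in the listing above).
Guid uploadId = LoadCheckpoint(out long offset); // hypothetical helper

using (FileStream fs = System.IO.File.OpenRead(fileName))
{
	fs.Seek(offset, SeekOrigin.Begin); // skip the slices already committed
	byte[] buffer = new byte[blockSize];
	int bytesRead;
	while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
	{
		bool lastSlice = fs.Position >= fs.Length;
		using (MemoryStream s = new MemoryStream(buffer, 0, bytesRead))
		{
			if (lastSlice)
			{
				uploadFile = uploadFile.FinishUpload(uploadId, offset, s);
				ctx.ExecuteQuery();
			}
			else
			{
				ClientResult<long> r = uploadFile.ContinueUpload(uploadId, offset, s);
				ctx.ExecuteQuery();
				offset = r.Value;
				SaveCheckpoint(uploadId, offset); // hypothetical helper
			}
		}
	}
}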

6 thoughts on “Upload Large Files to SharePoint Online”

  1. This code helped me move large files from one library to another (SharePoint Online) with some modifications. Thanks!


  2. Hello Piyush, here is the problem statement:
    When using an Upload Documents API, where we send the file (10 GB) along with the documentLocation and documentID to be uploaded to a SharePoint document library, the web server request fails. Any idea how this can be worked around so we can carry out the solution you proposed above?


  3. Hi Piyush, thanks for this post, very helpful. So this is the problem statement for us:
    We are using Upload File APIs to upload files of various sizes, where each API request sends the file along with a FileID and FileLocation for upload to a SharePoint document library. For files larger than 5 GB, the API request never reaches the web server to continue with the upload-file implementation. Any ideas on how to overcome this blocker?


  4. Hello

    I think there is one small issue with your code.

    // We’ve reached the end of the file
    if (totalBytesRead <= fileSize)

    This should probably be

    // We’ve reached the end of the file
    if (totalBytesRead >= fileSize)

