Upload Large Files to SharePoint Online

Microsoft has increased the SharePoint Online per-file upload size limit to 10 GB. Here’s a full list of SharePoint Online limitations. This raises a couple of new challenges when the upload has to be carried out through code:

  • A file this large cannot be held in a single .NET object in memory.
  • It’s very difficult to upload a file of this size in a single HTTP request. If the connection fails midway, the entire upload has to be restarted; for a large file, that can easily turn into a nightmare.

Fortunately, Microsoft has introduced three methods, StartUpload, ContinueUpload and FinishUpload, which upload a large file in small chunks. The idea is to break the large file into smaller chunks and then upload each chunk, one at a time. This solves both of the problems stated above:

  • Only the current block/chunk of the file needs to be kept in memory at any given point in time, as opposed to the entire file, so the application won’t be resource-hungry.
  • The upload is spread across multiple HTTP requests. So if, say, the third chunk fails after two chunks have already been uploaded, we can resume from exactly the 3rd chunk; there’s no need to restart the upload. Again, this is especially important for large files (a minimal resume sketch follows this list).
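
To see why resuming works, here’s a minimal sketch. The savedUploadId, savedOffset, nextSliceStream and serverRelativeFileUrl names are hypothetical stand-ins for state your own code would persist after each successful slice; they are not part of the full snippet further below.

// Hypothetical resume step: re-acquire the partially uploaded file and
// continue from the last byte offset the server acknowledged.
Microsoft.SharePoint.Client.File partialFile = ctx.Web.GetFileByServerRelativeUrl(serverRelativeFileUrl);
ClientResult<long> result = partialFile.ContinueUpload(savedUploadId, savedOffset, nextSliceStream);
ctx.ExecuteQuery();
long nextOffset = result.Value; // persist this before sending the next slice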

 

Here’s the C# code. It’s written as the body of a method that returns the uploaded Microsoft.SharePoint.Client.File, which is why it returns uploadFile once the last slice finishes.

int blockSize = 8000000; // 8 MB
string fileName = "C:\\Piyush\\9_6GB.odt", uniqueFileName = String.Empty;
long fileSize;
Microsoft.SharePoint.Client.File uploadFile = null;
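// Each chunked upload session is identified by a GUID that is passed to
// StartUpload, ContinueUpload and FinishUpload.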
Guid uploadId = Guid.NewGuid();

using (ClientContext ctx = new ClientContext("siteUrl"))
{
	ctx.Credentials = new SharePointOnlineCredentials("user@company.onmicrosoft.com", GetSecurePassword());
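	// The target document library ("Documents" is the default library title)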
	List docs = ctx.Web.Lists.GetByTitle("Documents");
	ctx.Load(docs.RootFolder, p => p.ServerRelativeUrl);

	// Use large file upload approach
	ClientResult<long> bytesUploaded = null;

	FileStream fs = null;
	try
	{
		fs = System.IO.File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);

		fileSize = fs.Length;
		uniqueFileName = System.IO.Path.GetFileName(fs.Name);

		using (BinaryReader br = new BinaryReader(fs))
		{
			byte[] buffer = new byte[blockSize];
			byte[] lastBuffer = null;
			long fileoffset = 0;
			long totalBytesRead = 0;
			int bytesRead;
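			// The first slice goes to StartUpload, the last one to FinishUpload,
			// and everything in between to ContinueUpload.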
			bool first = true;
			bool last = false;
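			// NOTE: this snippet assumes fileSize > blockSize; a file smaller than
			// one slice would need the regular (non-chunked) upload path instead.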

			// Read data from filesystem in blocks
			while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
			{
				totalBytesRead = totalBytesRead + bytesRead;

				// Have we reached the end of the file?
				if (totalBytesRead == fileSize)
				{
					last = true;
					// Copy to a new buffer that has the correct size
					lastBuffer = new byte[bytesRead];
					Array.Copy(buffer, 0, lastBuffer, 0, bytesRead);
				}

				if (first)
				{
					using (MemoryStream contentStream = new MemoryStream())
					{
						// Add an empty file.
						FileCreationInformation fileInfo = new FileCreationInformation();
						fileInfo.ContentStream = contentStream;
						fileInfo.Url = uniqueFileName;
						fileInfo.Overwrite = true;
						uploadFile = docs.RootFolder.Files.Add(fileInfo);

						// Start upload by uploading the first slice.
						using (MemoryStream s = new MemoryStream(buffer))
						{
							// Call the start upload method on the first slice
							bytesUploaded = uploadFile.StartUpload(uploadId, s);
							ctx.ExecuteQuery();
							// fileoffset is the pointer where the next slice will be added
							fileoffset = bytesUploaded.Value;
						}

						// we can only start the upload once
						first = false;
					}
				}
				else
				{
					// Get a reference to our file
					uploadFile = ctx.Web.GetFileByServerRelativeUrl(docs.RootFolder.ServerRelativeUrl + System.IO.Path.AltDirectorySeparatorChar + uniqueFileName);

					if (last)
					{
						// Is this the last slice of data?
						using (MemoryStream s = new MemoryStream(lastBuffer))
						{
							// End sliced upload by calling FinishUpload
							uploadFile = uploadFile.FinishUpload(uploadId, fileoffset, s);
							ctx.ExecuteQuery();

							// return the file object for the uploaded file
							return uploadFile;
						}
					}
					else
					{
						using (MemoryStream s = new MemoryStream(buffer))
						{
							// Continue sliced upload
							bytesUploaded = uploadFile.ContinueUpload(uploadId, fileoffset, s);
							ctx.ExecuteQuery();
							// update fileoffset for the next slice
							fileoffset = bytesUploaded.Value;
						}
					}
				}

			}
		}
	}
	finally
	{
		if (fs != null)
		{
			fs.Dispose();
		}
	}
}

return null; // reached only if the file was empty and the loop never ran
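
The snippet calls a GetSecurePassword() helper that it never defines. Here’s a minimal sketch of one, assuming the password arrives as a plain string; in real code, pull it from a secure source rather than hard-coding it.

private static System.Security.SecureString GetSecurePassword()
{
	// Hypothetical helper: SharePointOnlineCredentials expects a SecureString,
	// so copy the plain-text password into one, character by character.
	System.Security.SecureString securePassword = new System.Security.SecureString();
	foreach (char c in "yourPassword") // assumption: replace with a securely sourced value
	{
		securePassword.AppendChar(c);
	}
	return securePassword;
}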

 

Key Takeaways

  • SharePoint Online now accepts files of up to 10 GB, but uploading them through code requires a chunked approach.
  • StartUpload, ContinueUpload and FinishUpload upload a large file slice by slice, so only the current slice is held in memory.
  • StartUpload and ContinueUpload return the byte offset at which the next slice should be appended; persisting it (along with the upload GUID) lets a failed upload resume from the failing chunk instead of starting over.
