Architecting and selling storage solutions like this is what I do for a living.
"Cloud" can mean any number of things, but at its root it just means "someone else's computer," so the most important question is whose computer we're talking about. I like to steer my customers to Microsoft Azure or Amazon S3, and occasionally Glacier. Those providers design for eleven nines (99.999999999%) of durability, meaning the odds of them actually losing your data in a given year are vanishingly small; their availability SLAs are lower, typically in the 99.9%-and-up range. The best on-premise, customer-owned storage equipment advertises six nines (99.9999%) of availability, which translates to about 31.5 seconds of unplanned downtime in a year. Durability and availability aren't quite the same thing, and these are all aggregate marketing numbers, but roughly speaking the majors offer data protection that's orders of magnitude beyond anything you'd build yourself.
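If you want to sanity-check the nines yourself, the downtime math is just "what fraction of a year are you allowed to be dark." Here's a quick back-of-envelope sketch (availability nines only; durability nines are a different animal, they're about the odds of losing data, not about downtime):

```python
# Back-of-envelope: how much downtime per year a given number of
# availability nines allows. (Durability nines are a different metric:
# the probability of *losing* data, not of it being unreachable.)
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def allowed_downtime_seconds(nines: int) -> float:
    """Seconds of unavailability per year at an availability of `nines` nines."""
    unavailability = 10 ** (-nines)
    return SECONDS_PER_YEAR * unavailability

for n in (3, 4, 6):
    print(f"{n} nines -> {allowed_downtime_seconds(n):,.1f} seconds/year")

# 3 nines -> 31,536.0 seconds/year (about 8.8 hours)
# 4 nines -> 3,153.6 seconds/year (about 53 minutes)
# 6 nines -> 31.5 seconds/year
```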
The costs involved, however, can make a big difference. Depending on your volume, the OPEX cost of a cloud provider can add up over time to more than an on-prem solution, but at the volumes you're talking about you're going to pay roughly $0.05/GiB/month, or right around $100/month. You can reduce that cost by reducing the protection level; I think it's Azure that does a nickel a gig for 3x geo-replication, and they have lower-cost plans if you keep fewer copies. Stretch that out over 36 months and you're at about $3,600, which could buy you a low-end on-prem solution but won't get you anywhere near the availability that googlezon can provide. In that price range you're looking at a three-nine or four-nine solution at best, and I'm being somewhat charitable.
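If you want to run your own numbers, the math fits in a few lines. The rate is the nickel-a-gig figure above, and the ~2 TiB capacity is just back-solved from the ~$100/month figure, so plug in your own values:

```python
# Napkin math for cloud storage cost over a 3-year horizon.
# Rate and capacity are assumptions back-solved from the figures above;
# real pricing varies by provider, tier, redundancy level, and region.
price_per_gib_month = 0.05   # ~$0.05/GiB/month for geo-replicated object storage
capacity_gib = 2 * 1024      # ~2 TiB, which lands you right around $100/month
months = 36

monthly = price_per_gib_month * capacity_gib
total = monthly * months
print(f"~${monthly:,.0f}/month, ~${total:,.0f} over {months} months")
# ~$102/month, ~$3,686 over 36 months
```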
Other costs to worry about are security, data sovereignty, transmission, and performance. Security is a big one, obviously, and it depends on what kind of data you're working with. I will tell you that, done right, the cloud can be made quite secure; there are cloud solutions permitted for the storage of ITAR data. Data sovereignty comes into play if you're working with data in other countries, as many jurisdictions require that data created there stays there. Transmission can be a problem in two ways: if you have an unreliable link, your workflow can get disrupted by link failures, and you may need additional bandwidth to carry traffic that used to stay on the local network.
Performance is a big one that deserves its own paragraph. If you're just pulling down a couple of spreadsheets here and there, it's not a big deal. If you're working with project data made up of collections of many files, things get stickier. Windows clients typically use CIFS to talk to a remote file server and pull data down, and CIFS is a very chatty protocol that requires a lot of pre-game before the first bit of actual data is sent. On a local network with sub-millisecond latencies, you'll almost never notice it. Once you send that traffic across the wide area, however, each of those requests adds a few milliseconds here and a few there until you wind up taking 12 minutes to open a CAD project (no exaggeration).
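You can model the effect on a napkin: the wait is round trips multiplied by latency, and bandwidth barely enters into it. The request count below is made up purely for illustration; real SMB/CIFS sessions vary wildly by client version and file mix:

```python
# Rough model of why chatty protocols fall over on the WAN: total wait
# is dominated by (number of round trips) x (latency), not by bandwidth.
# The round-trip count is an illustrative assumption, not a measurement.
def open_time_seconds(round_trips: int, latency_ms: float) -> float:
    return round_trips * latency_ms / 1000.0

round_trips = 10_000  # a project with many small files and metadata lookups

for latency_ms in (0.5, 30, 70):
    t = open_time_seconds(round_trips, latency_ms)
    print(f"{latency_ms:>5} ms latency -> {t / 60:.1f} minutes just in round trips")

#   0.5 ms -> 0.1 minutes (you never notice it on the LAN)
#    30 ms -> 5.0 minutes
#    70 ms -> 11.7 minutes (there's your 12-minute CAD project open)
```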
I typically direct customers to one of two products to solve these issues. My preferred solution is from an outfit called Panzura. They grew up in the engineering space, and their global file system controller sits at each location and looks just like a Windows file server to the clients. You create a share, map a drive to it, and it works like any other file server as far as the end users are concerned. On the back end you attach it to your favorite cloud provider and you have a bottomless file server: as long as the provider keeps allocating you additional storage, you'll never see a filesystem-full message. The system also performs FIPS 140-2 encryption, deduplicates the data, and compresses it before sending it to the cloud. That gets you the security you need, means the cloud provider never holds a readable copy of your data (so a subpoena served on them gets nothing useful), and reduces the amount of cloud storage you pay for, typically by about 50%. You can also stop worrying about backup by having the system take snapshots on a pre-defined schedule and store those in the cloud as well. The appliances talk to each other over whatever VPN or MPLS network you have, which keeps file-open times down because the chatty CIFS traffic is handled locally. When a file is modified in one location it is immediately locked in every other location until the lock is released, at which point the new version is available everywhere, which makes it well suited to cross-site collaboration.
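The dedupe-and-compress step matters for the bill, too, because you only pay the provider for what actually lands in the bucket. A quick sketch using the same assumed rate and capacity as before (the 50% reduction is the typical figure I mentioned; your actual mileage depends entirely on how repetitive your data is):

```python
# What a ~50% dedupe/compression reduction does to the cloud bill.
# Rate, capacity, and reduction ratio are assumptions from the figures
# above; actual savings depend on the data.
price_per_gib_month = 0.05   # same assumed object-storage rate as before
logical_gib = 2 * 1024       # ~2 TiB of files as the users see them
reduction = 0.50             # dedupe + compression before upload

stored_gib = logical_gib * (1 - reduction)
print(f"Pay for {stored_gib:,.0f} GiB instead of {logical_gib:,.0f} GiB "
      f"-> ~${stored_gib * price_per_gib_month:,.0f}/month instead of "
      f"~${logical_gib * price_per_gib_month:,.0f}/month")
# Pay for 1,024 GiB instead of 2,048 GiB -> ~$51/month instead of ~$102/month
```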
For smaller needs on a smaller budget, I like Copy.com (it's called CudaDrive now). It's from Barracuda Networks, so it's designed to be as simple as possible. It's similar to the other file-sharing apps out there, but it lets the administrator keep control of the users and their data, and it gives you shared filesystems stored in the cloud that show up as a folder in Windows Explorer or through an app on your Android or Apple device. It's free for 15 GiB per user, and they have unlimited-user plans for about $170/TiB/month. I'm a fan of Barracuda primarily because they focus on delivering a top-notch product at a good price point that you don't need a doctorate to manage.
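For perspective, here's how that bundled per-TiB price stacks up against the raw nickel-a-gig object storage rate from earlier (list prices only; plans and quotas change, so treat it as napkin math, not a quote):

```python
# Comparing the bundled file-sync service to raw object storage, per TiB/month.
# Both prices are the figures quoted above, not current list prices.
raw_object_storage = 0.05 * 1024   # nickel-a-gig rate -> ~$51/TiB/month
bundled_service = 170.0            # quoted unlimited-user plan, $/TiB/month

premium = bundled_service - raw_object_storage
print(f"Raw storage ~${raw_object_storage:.0f}/TiB/month; bundled service "
      f"~${bundled_service:.0f}/TiB/month; the ~${premium:.0f}/TiB/month delta "
      f"buys the sync client, user management, and mobile apps")
```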
Hope that helps.