I recently hosted the first in a series of webinars for Peer Software, about their flagship PeerGFS (Peer Global File Service) software.
Focusing on Edge Caching, we showed how storage footprint and cost can be reduced at remote or edge locations, where users only need to work with a smaller subset of a larger corporate dataset.
In short, we showed how a business can save real money by being smart about storing only the files that users need to work with, rather than allowing the sprawl of unstructured data to waste disk space at the edge.
A good percentage of PeerGFS customers use it to synchronize large datasets between many different storage locations, in datacenters and offices around the world. Having multiple continuous-data-protection copies of files that are geographically dispersed makes sense for many reasons, not least redundancy and high availability.
This model of distributed files is great from a user perspective. Users enjoy the performance benefit of working with a fast, local copy of the files, while PeerGFS synchronizes them in the background to keep each storage location up to date. They no longer have to wait while files open from remote servers across a WAN connection or VPN.
For the business, though, this convenience can have a downside: it needs to provide enough storage and infrastructure to host all of the files at each location. For many, this is acceptable, but for some organizations the cost of hosting 'all files everywhere' is less so. Some users at remote offices or edge locations only need to work, day to day, with a smaller subset of the full dataset. Imagine, for example, a branch office where the users only work on the files for one of the organization's clients.
This is where the Edge Caching feature of PeerGFS can help. Edge caching automatically figures out which files the users are working on currently, which files have been worked on recently, and which haven’t been touched in a while. Using this intelligence, PeerGFS can keep the right files local at the edge location, so that they are immediately available to the users there, and can dehydrate or stub the rest of the files, so that they take up next to no space at the edge.
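To illustrate the idea (and only the idea; this is not PeerGFS code, and the real product makes these decisions automatically), here is a minimal Python sketch of an age-based stubbing pass. The threshold, the path, and the dehydrate() helper are all hypothetical assumptions for the example.

```python
# Illustrative sketch only -- not PeerGFS code or its actual algorithm.
# It shows the general shape of an age-based stubbing pass: files that
# haven't been touched for a while are dehydrated to near-zero-size stubs,
# while recently used files stay fully local at the edge.
import time
from pathlib import Path

STUB_AFTER_DAYS = 30                 # hypothetical policy threshold
EDGE_SHARE = Path(r"D:\EdgeCache")   # hypothetical edge file share

def dehydrate(path: Path) -> None:
    """Placeholder for turning a file into a stub.

    A real implementation would keep the data safely on the central (hub)
    storage and leave only a tiny placeholder behind; here we just log.
    """
    print(f"Would dehydrate: {path}")

def stubbing_pass(root: Path, max_age_days: int = STUB_AFTER_DAYS) -> None:
    cutoff = time.time() - max_age_days * 86400
    for path in root.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            dehydrate(path)          # untouched in a while -> stub it

if __name__ == "__main__":
    stubbing_pass(EDGE_SHARE)
```

Last-access time is just one possible signal; the point is that the edge only spends disk space on files that are actually in use.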
From the user’s point of view, they will still see the entire dataset. If they open a locally cached file, it will open as normal. If they open a dehydrated stub file, it will be rehydrated on demand and then opened. This means that storage footprint and cost can be significantly reduced, whilst keeping the most important files close to the users at the edge.
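As a rough sketch under the same caveats (hypothetical helpers, not the product's implementation), 'rehydrate on demand' conceptually looks like this: opening a cached file is a normal open, while opening a stub first pulls the data back from the central store.

```python
# Illustrative sketch only -- not how PeerGFS is implemented. Conceptually,
# opening a stub triggers a fetch of the file data before the open completes.
from pathlib import Path

def is_stub(path: Path) -> bool:
    """Hypothetical stub check; a real system might inspect a reparse
    point or file attribute rather than relying on file size."""
    return path.stat().st_size == 0

def rehydrate(path: Path) -> None:
    """Placeholder for fetching the file's data back from the hub store."""
    print(f"Rehydrating {path} from the central store...")

def open_edge_file(path: Path):
    if is_stub(path):
        rehydrate(path)     # transparent to the user, apart from a short wait
    return path.open("rb")  # from here on, a normal local file open
```

The key point is that the directory listing never changes from the user's perspective; only the time to open differs for a stubbed file.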
By reducing storage footprint and cost at the edge, PeerGFS helps to tame the sprawl of unstructured data, freeing the storage there to hold the files that users need rather than wasting disk space on the files that they don't.
There were some great questions that came in from the webinar audience, such as:
"Does PeerGFS work with Microsoft Azure?"
and
"What happens if the edge location loses connectivity, for let's say, 30 minutes?"
Come and watch the recorded session for the answers. And if you're interested in saving money for your organization, while ensuring that users enjoy the performance benefit of fast, local file access instead of complaining about how slow it is to use files remotely across the WAN, visit https://www.peersoftware.com for more information.
Spencer Allingham
A thirty-year veteran of the IT industry, Spencer has progressed from technical support and e-commerce development through IT systems management and, for ten years, technical pre-sales engineering. Focusing much of that time on the performance and utilization of enterprise storage, he has spoken on these topics at VMworld, European VMUGs and TechUG conferences, as well as at Gartner conferences.
At Peer Software, Spencer assists customers with deployment and configuration of PeerGFS, Peer’s Global File Service for multi-site, multi-platform file synchronization.