IT Best Practices
13 May 2026
If your backup fails the first time you need it, it was never really a strategy, just a very expensive comfort blanket. Directors do not need more storage jargon; they need proof that critical data can be restored quickly, securely and without drama. Official guidance puts the emphasis on timely recovery, secure handling and regular testing.
If backups run ad hoc, the newest recoverable copy may already be out of date when an incident hits. The practical fix is simple: automate schedules and alerts, and set frequency around the data you cannot afford to lose.
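One way to "manage by alert" is to compare each data set's newest backup against the maximum loss the business said it could tolerate. A minimal sketch, assuming a per-data-set recovery point objective (the names, timestamps and RPO values below are illustrative, not from any official guidance):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical recovery point objectives: if the newest backup is older
# than its RPO, the exposure exceeds what the business agreed to accept.
RPO = {
    "finance-db": timedelta(hours=4),
    "shared-drive": timedelta(hours=24),
}

def stale_backups(last_backup_at: dict, now: datetime) -> list:
    """Return the data sets whose newest backup is older than its RPO."""
    return sorted(name for name, taken in last_backup_at.items()
                  if now - taken > RPO[name])

# Example: a finance database backed up 6 hours ago breaches its 4-hour RPO.
now = datetime(2026, 5, 13, 12, 0, tzinfo=timezone.utc)
last = {
    "finance-db": now - timedelta(hours=6),
    "shared-drive": now - timedelta(hours=12),
}
print(stale_backups(last, now))  # -> ['finance-db']
```

A scheduled job running a check like this turns "I think the backups ran" into a daily yes/no answer.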
Ransomware can spread to attached storage, and physical incidents such as theft or fire can wipe out both live data and the local backup together. Keep at least one copy offsite and one copy offline, digitally disconnected or otherwise resistant to destructive actions.
A backup that has never been restored is basically Schrödinger’s safety net. You cannot prove recovery time until you test it, so schedule restore tests for whole systems and individual files, and keep a simple record of results.
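A file-level restore test can be partly automated: restore to a scratch location, then prove the restored copy is byte-identical to the original. This is a sketch of that check only, using throwaway files to stand in for a real source and restore target:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file in chunks so large backups are not loaded into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_matches(source: Path, restored: Path) -> bool:
    """A restore test passes only if the restored file matches the original."""
    return sha256(source) == sha256(restored)

# Demo: simulate a successful restore by copying the bytes across.
with tempfile.TemporaryDirectory() as d:
    src = Path(d) / "payroll.csv"
    dst = Path(d) / "payroll_restored.csv"
    src.write_bytes(b"id,amount\n1,1000\n")
    dst.write_bytes(src.read_bytes())
    print(restore_matches(src, dst))  # -> True
```

Log the result each time and the "simple record of results" writes itself.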
Some cloud services keep previous versions or deleted files for a short period, but official guidance says not to rely on those features alone for critical data. Keep an independent copy in another safe place or service.
Backups often contain personal and commercially sensitive information, so lost or stolen media can become a breach problem as well as an IT problem. Encrypt backups at rest and in transit, and manage keys separately so you can still restore data when needed.
If retention is too short, useful clean copies may no longer be available when corruption or malicious deletion is discovered. Set retention policies in advance, align them to your backup schedule and data types, and use soft delete or similar protection where available.
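Writing the retention policy down as code makes it auditable. A minimal sketch of a simple age-based window (a real policy would usually also keep periodic long-term copies, e.g. one per month; the dates and 60-day window below are illustrative):

```python
from datetime import date, timedelta

def expired(backup_dates, retention_days: int, today: date):
    """Backups older than the retention window are candidates for deletion."""
    cutoff = today - timedelta(days=retention_days)
    return sorted(d for d in backup_dates if d < cutoff)

today = date(2026, 5, 13)
dates = [today - timedelta(days=n) for n in (1, 7, 30, 90)]

# With a 60-day window, only the 90-day-old copy falls outside it.
print(expired(dates, retention_days=60, today=today))
# -> [datetime.date(2026, 2, 12)]
```

The same function answers the harder question in reverse: if corruption is discovered 45 days after the fact, does a clean copy still exist inside the window?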
Manual processes are easy to miss, especially when staff are busy, off sick or halfway through a bank holiday weekend. Automate jobs, alerts and reporting, then manage by exception rather than hope.
A sensible baseline is the 3-2-1 rule: keep three copies of data, on two different media types, with one copy offsite. Then add encryption, automated schedules, documented retention, routine restore testing and, where possible, an offline or otherwise protected copy. If you are recovering after malware, verify the backup is clean before you restore it.
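The 3-2-1 rule is mechanical enough to check automatically. A sketch, assuming you can inventory each copy's medium and location (the `Copy` structure and media names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Copy:
    medium: str      # e.g. "disk", "tape", "cloud-object"
    offsite: bool
    offline: bool    # disconnected, or otherwise resistant to destructive actions

def meets_3_2_1(copies) -> bool:
    """Three copies, on at least two distinct media, with one copy offsite."""
    return (len(copies) >= 3
            and len({c.medium for c in copies}) >= 2
            and any(c.offsite for c in copies))

copies = [
    Copy("disk", offsite=False, offline=False),        # live data
    Copy("disk", offsite=False, offline=False),        # local backup
    Copy("cloud-object", offsite=True, offline=True),  # offsite, immutable copy
]
print(meets_3_2_1(copies))  # -> True
```

Extending the check with `any(c.offline for c in copies)` captures the article's extra recommendation of an offline or otherwise protected copy.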
The comparison below is an editorial synthesis of the official guidance: local copies can help speed recovery, cloud copies improve physical separation, and no approach is ransomware-resistant by default without the right controls.
| Model | Pros | Cons |
| --- | --- | --- |
| On-prem | Fast local restores; direct control over hardware and location | Needs strong physical protection and a separate copy elsewhere |
| Cloud | Offsite by design; scalable; can support stronger deletion controls | Still depends on good identity, configuration and monitoring |
| Hybrid | Combines quick local recovery with offsite resilience | More moving parts, policies and checks to manage |
Ask five questions:

1. Do we know what is business-critical?
2. Can we prove the last restore test worked?
3. Is at least one copy offsite or offline?
4. Are backups encrypted, automated and retained long enough?
5. Are cloud services covered by an independent recovery plan?

Those are the questions that turn backups from a purchase into real resilience.
If you want a quick backup health check, we can review coverage, retention, restore testing and ransomware resilience without turning it into a three-hour storage lecture. Get in contact today.