Monday, November 11, 2013

Why I would not buy a NETGEAR ReadyNAS

Recently I was fortunate enough to test a NETGEAR ReadyNAS. Now, this review is rather harsh toward the ReadyNAS, so I say "fortunate" only because I most likely would not have been exposed to this hardware in any other context. Within minutes of beginning work on managing these devices, it was clear to me that I would not spend my own money on one. I'll detail why in just a moment.

For those not familiar with Network Attached Storage (NAS), a quick explanation: a NAS is like a hard disk that connects to a network. In addition to simple storage, a network-attached disk like this usually offers other software features, like all-inclusive backup, remote access through VPN, and media sharing. In short, a NAS is a storage appliance: a plug-and-play place to store data.

The disappointing design and functionality of the ReadyNAS made me chuckle a bit, because NETGEAR marketers regularly call me about selling the ReadyNAS as part of my solution portfolio. I've explained to them that I am a QNAP and Synology guy, but they persist, attempting to answer my questions, assuring me that their product does everything the QNAP does, and acting interested in my suggestions for improving the NETGEAR line. I don't know; from my perspective, the ReadyNAS is a solution beyond hope, and I feel that NETGEAR should simply drop the product line. They are not a storage company.

http://www.theinquirer.net/inquirer/review/1014485/netgear-storage-central-killed-pcs

Introduction

My exposure comes from a job site where I was responsible for integrating a Philips ultrasound system called Xcelera. As part of this solution, Philips uses two stock ReadyNAS Pro 4 units to store and archive patient studies, which is essentially all of the ultrasound and analysis data that is generated.

As a sysadmin, I am tasked with checking that studies are appearing on the NAS and are subsequently copied to a second NAS.

User Interface

Firstly, the firmware for the ReadyNAS is called RAIDar. It has a web interface which I found functionally quite poor compared to the current firmware from QNAP and Synology. The design and interactivity I would place somewhere in the 2003 era, even though the firmware is dated 2012. While I may not be a fan of the "desktop in your browser" mentality of the Synology and QNAP firmwares, at least they are featureful, perform well, are easy to use, and give interactive feedback.

No Link to Admin Mode

Also, the first thing you realize when you get to the RAIDar login page is that even if you log in as admin, there is nothing to configure! It's not readily apparent that you must use a separate URL for administration; from the base URL you are simply directed to a bare-bones file manager page. There should be a link from the file management interface to the admin area.

https://netgear/shares/ = generic file manager
https://netgear/admin/ = admin area

File Manager

Since the file manager was one of the first things I saw in RAIDar, here's what it looks like:
Very rudimentary indeed. Here's the QNAP equivalent, which includes a nicer multi-pane view with previews, plus search.


Share Creation

Creating shares is done through this form on the ReadyNAS.
Unfortunately, there is nothing dynamic about this page. I will say that the bulk-creation aspect is somewhat appealing; on the QNAP you must step through a seven-page wizard to create a single share. But at least on the QNAP you can assign permissions to a share during creation, whereas on the ReadyNAS you must first create the share, then switch views to manage its permissions.

Audit Features

In the process of integrating the ReadyNAS Pro 4 into my auditing and logging framework, I quickly realized that remote logging is not supported. This leads me to conclude that the ReadyNAS and RAIDar are not auditable and should not be used in secure environments. In practice this seems to be ignored; in this case, the device is being used in a medical context. I suspect anyone using a ReadyNAS at a PCI-DSS or HIPAA compliant site is ignoring this major shortcoming and possibly not being forthcoming with their suppliers, managers, and customers.

The interesting thing is that NETGEAR does ship syslogd, which can send logs to a central logging host; however, you must log into the NAS via SSH (and possibly forfeit future support) to set it up!
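
For the brave, here's a minimal sketch of the workaround, assuming the stock syslogd reads a standard /etc/syslog.conf and that 192.168.1.50 stands in for your central log host:

# on the ReadyNAS, over SSH, at your own risk
echo '*.* @192.168.1.50' >> /etc/syslog.conf
kill -HUP $(pidof syslogd)   # have syslogd re-read its config

The *.* @host line is the standard syslogd syntax for forwarding everything to a remote host over UDP port 514.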

The ReadyNAS forum has a thread, started in 2008, in which a user requests syslog functionality. As of this writing in 2013, it still has not been implemented.

In contrast, the QNAP can both log to remote hosts and itself act as a syslog server, and even send email alerts on syslog events, all without touching a config file! Very slick.


Add Ons & Community

The market for add-ons on the ReadyNAS is relatively poor. Worse, I could not get the market listing to load on my ReadyNAS at all; I had to download packages manually from NETGEAR's website.

In contrast, QNAP has taken great open source projects and ported them to the Turbo NAS series. In fact, the newest Turbo NAS models include HDMI output, and QNAP went to the lengths of porting XBMC, a best-of-breed media center package, to its NAS rather than doing something silly like writing its own solution. The included apps on the QNAP are also quite nice: OpenVPN, MySQL, ClamAV, and Photo, Video, and Music apps that let you access content through a web browser without the complexities of a VPN.

The QNAP forums and QNAP wiki are also much more active and complete, respectively, than the NETGEAR ones.

Apps Market

This section of the post is relevant mostly to consumers and prosumers, as remote access to a NAS in a corporate environment is not a feature most companies are looking for.

However, I think that an active set of mobile apps shows a company's commitment to current technologies and the competency of their development team.

Let's compare!

Here is the NETGEAR app list for Android. Very poor reviews! The icons don't even match. It seems either the apps don't work, or people are having trouble using them. 2-star averages.
And here is a list for Synology. At least all of their icons are consistent looking. There are a lot of apps! Generally 4-star reviews.
And here is QNAP. The reviews are generally favorable, and most icons are consistent.
And Thecus. Not many apps, but maybe features are consolidated in each app.

In summary

In short, my experiences with NETGEAR have shown them to be a lackluster company when it comes to quality control and feature requests, though they do at least seem to fix their bugs.

My experience with NETGEAR support on one of their VPN firewalls also convinced me that NETGEAR is interested in selling hardware, but not in supporting it.

I would not recommend the NETGEAR ReadyNAS, or most of their other products, to any of my clients, save for network switches and other devices that do not have complex firmware or functionality. As far as I'm concerned, NETGEAR is a simple network gear company that excels only at creating "dumb" devices. They should stick to that niche.

The positives:
  • Nice little handle at the back of the machines to make them easy to carry
  • Build quality is professional
  • NETGEAR donates to netatalk, the FLOSS project that develops the AFP functionality found in most NASes
  • Some cool addons @ http://www.readynas.com/?cat=75
  • Batch adding of shares

The negatives:
  • Clunky user interface
  • Lacking advanced features
  • Threatening disclaimers when trying to get root shell access
  • Poor community involvement and community documentation
  • Poorly rated mobile apps
  • No syslog auditing for compliance, a critical enterprise feature

If you are looking for an extensible, feature-rich, easy-to-use NAS with a great support community and enterprisey features, look elsewhere.

Friday, October 4, 2013

What do I name my server? (server naming guidelines)

Best practices

For more ideas on server naming, I really like this article at retrohack.com, and the handy abbreviations its author has come up with:
<location><role><sequence>

Example for some database servers in Ontario (ON = Ontario, SQL = SQL Server):
ONSQL01
ONSQL02

Drawbacks of the "smart" naming approach

The first drawback of this approach is that server roles often change, and installing a new service can render the function of a server completely different. Many services will refuse to work properly if the server is renamed, and this is often hard to predict, so in my book the policy should always be "don't rename servers". There may be certain roles that never change, e.g. Active Directory, where it's fairly certain that an AD server will always be an AD server.

The second drawback of such a naming scheme is that the names can get so unwieldy that you may as well have gone with servers named after Looney Tunes characters and simply looked them up in a name -> role table.

Commentor "Bish" in the above reference blog post at retrohack.com writes:
My company is the embodiment of your plan. USWAFTP104, UKLODC62, etc. It is the hell anyone would instantly recognize after 12 seconds of consideration.
Which machine was that? Do you mean USMIESX4H2 or was that USMIESX2H4 that you just shut down? Oh no! The amount of times that someone remotes (ssh, rdp, etc) into a box and – even if they double-check the number 6 times – shuts down, patches or reboots the wrong host is astounding.
A lot of my clients are using UNC paths (\\) to launch applications and services, and trying to explain to a user to open a Run window and type USMIESX4H2 as the server name is just very unfriendly.


Because of this, it may be wise to implement completely "agnostic" names such as those listed at namingschemes.com. Of course, naming all of your servers after ships that appear in Star Trek is probably a logical approach (thanks, Spock), but your resident boss may think the names are too childish. In my opinion, this plays a factor only if your organization is client facing, i.e., "Hi Mr. Schneider, yes, log into your client portal on daffy.procompany.com".

My picks today are (appropriately nerdy):
  1. Car parts (pedal, clutch, wheel)
  2. Chess pieces (rook, queen, pawn)
  3. Periodic table elements (copper, chromium, hydrogen)
  4. Planets (mars, jupiter, saturn)
  5. Greek alphabet (alpha, beta; though there are some unfriendly names like 'mu'); this is closely related to the NATO phonetic alphabet.
Take Greek gods, for example: you can start with Athena and expand either numerically ('athena1', 'athena2') or geographically ('athena-west', 'athena-east') and still have some semblance of user friendliness. Note, however, that one common practice is to make regional servers part of their own DNS subdomains, such as athena.west.example.com and athena.east.example.com, while many DNS naming documents conversely suggest NOT reusing the same hostname for different servers even when they are in different DNS domains!
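
To illustrate the subdomain approach with a hypothetical example.com, the relevant BIND zone records would look something like this:

; in the west.example.com zone:
athena    IN    A    10.1.0.10
; in the east.example.com zone:
athena    IN    A    10.2.0.10

Both machines answer to "athena" locally, yet their fully qualified names (athena.west.example.com and athena.east.example.com) stay unique.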

What Microsoft Says

And of course, what post on this blog would be complete without poking a bit of fun at Microsoft, this time for going in the exact opposite direction and recommending non-industry-standard naming conventions. I like this gem:
"Identify the owner of the computer in the computer name."
from KB909264 at Microsoft's site.


No! No! No! This is terrible advice! I run into this all of the time: a client lets an employee go and wants the machine renamed, or confusion arises from "Fred" using a PC called "TomPC1". Absolutely counter-intuitive.

Not to mention that some well-known RFCs discourage this naming convention, such as RFC1178.


Not only that, but I've had clients insist that the name of their computer be changed even though, for most intents and purposes, the hostname is never seen and is irrelevant to their job function. It just feels wrong, because "Tom" stole from the company and "I might get some bad juju from being associated with him in any way." Then I need to explain that changing the name may break some services in an unpredictable way somewhere down the line, and watch the user's eyes glaze over.

For client PCs

I settled at first on desktop-XX for desktops and notebook-XX for notebooks as my company's naming convention for clients (under 100 employees); later I dropped the hyphen because it's unnecessary.

So there you have it.

Monday, September 30, 2013

Hybrid ISO disc images - CD/DVD or USB boot compatible!

Optical drives going out of style

Due to the lack of optical drives on modern netbooks and ultrabooks, it's often necessary to convert optical disc media into bootable USB keys. Normally this process is fairly difficult. Tools like UNetbootin claim to do it easily, but personally I've never had good luck with that software; it's cumbersome to use, and it's trying to solve a problem that should be solved by the vendor distributing the discs and ISO images.

A technology I'm rather fond of is the hybrid ISO, an ISO format that can be burned directly to CD/DVD or written directly to a USB disk in a straightforward way on Windows, Linux, or Mac OS.

It's as easy as
dd if=yourISO.iso of=/dev/sdb bs=4M && sync

Where /dev/sdb is your USB key. (Triple-check the device name with fdisk -l first; dd will happily overwrite whatever disk you point it at.)
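
And if you're remastering your own ISOLINUX-based image, the syslinux package includes an isohybrid tool that converts a regular ISO into a hybrid one, in place:

isohybrid yourISO.iso

After that, the very same file can be burned to a disc or dd'd to a USB key.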

Here is a small list of distributions that currently offer hybrid ISO images that can be written to DVDs or USB sticks:
  • SuSE Live KDE
  • SuSE Live
  • SolusOS
  • ArchBang
  • CrunchBang
  • Arch
  • Linux Mint
  • Ubuntu
I'd love it if they did this with Windows install images as well!

What are the pitfalls of hybrid ISO images? Why doesn't everyone distribute bootable images this way?

Xerox and Konica-Minolta administrator web interface lockout when control panel in use

I've worked with a fair number of all-in-one professional copiers for business, such as Xerox Workcentre, Konica Minolta BizHub, Canon imageRunner, and Ricoh Aficio.

In this new era of network-everything, a strange situation emerges: traditional office appliance service companies are offering devices that are heavily IT-integrated, yet they do not know the intricacies of managing devices in this way. I therefore find myself taking administrative control of these appliances from an IT standpoint and letting the service companies manage the physical hardware.

It is super convenient to be able to log in to a web UI and manage a diverse set of options on these devices. However, during my troubleshooting, I've run into an issue with more than one manufacturer where you cannot change settings in the web UI while the printer is processing a job or a user is at the front panel.

The Konica-Minolta front panel is exceptionally bad: it will prevent a user from making copies if it detects ANY activity in the web UI, and the timeout is quite long.

The Xerox machines, while quite fast--and with by far the most configurable options--are plagued by the opposite effect: the front panel overrides the web UI administrator. You cannot apply any settings while the printer thinks a user is at the front panel. In my experience this detection mechanism is poorly implemented; the user will insist they are nowhere near the control panel even though the web UI reports otherwise. Add to this that the timeout before the printer decides no one is working is very long and cannot be changed from the web UI itself, and you get an unproductive 15 minutes for a task that should have taken 2.

This must be frustrating for administrators working in universities, where a library printer may never be free of a user at the keypad. I also work with medical clinics where the machine is in use 98% of the time during working hours.

When I get a service call for one of these machines, like when the scan-to-email function is not working, my first goal is to reach the device's web UI over VPN or SSH tunneling to check the settings. Having your hands tied like this is immensely frustrating, especially when it interferes with benign operations like adding an address book contact. The device manufacturers should know better!

Considering that cheap Brother devices have no such interlocks on web UI administration, I'm hard pressed to give Xerox and Konica-Minolta in particular any kudos here, despite how nice the devices themselves are.

I expect the device to trust me, as the sysadmin, to change settings while it is in use. Anything else is just poor or lazy design.

Perhaps some of these settings can be changed over SNMP, which would get around this limitation?
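
I haven't verified what these particular vendors expose as writable, but probing with net-snmp is a reasonable first step, since most networked printers implement the standard Printer-MIB subtree (192.168.1.30 being a hypothetical printer IP and 'public' the usual factory-default read community):

snmpget -v2c -c public 192.168.1.30 sysDescr.0
snmpwalk -v2c -c public 192.168.1.30 1.3.6.1.2.1.43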

As a side note, my personal picks are Ricoh Aficios and Xerox Workcentres. However, I do not know if the Aficio line suffers from this interlock problem.

Good luck to all you sysadmins out there managing office printers!

References

http://forum.support.xerox.com/t5/Hardware/Unable-to-enter-Administrator-Mode/td-p/740

Wednesday, April 17, 2013

Windows Gripe #38

The built-in arp command on Windows will not take a hostname as an argument; you must specify an IP. In contrast, the Linux arp will accept a hostname, resolve it via DNS, and then report the MAC for the resulting IP, eliminating a step.
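
A rough cmd workaround is to let ping do the resolving and scrape the bracketed IP out of its output (myhost being a placeholder for whatever hostname you're after; double the % signs if you put this in a batch file):

for /f "tokens=2 delims=[]" %i in ('ping -n 1 myhost') do @arp -a %i

Only the "Pinging myhost [x.x.x.x]" line contains brackets, so arp runs exactly once, with the resolved address.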

Perhaps Microsoft did this because conceivably you could have a machine with 2 or more IP addresses, and Windows wants to make Really Sure™ that you want the MAC address of that particular IP.

This will be the first article in which I mention Windows' "Really Sure™" technology. You know, the one where you have to click 12 times to get through an install Wizard, and you are asked 2x if you really, truly, deeply want to delete that file. If you don't trust yourself, or you have a multiple personality disorder, Windows Really Sure™ will save your bacon.

Or it'll result in you and your friends blindly clicking any "OK", "Yes", "Confirm", and "Next" button you see. I think it's more the latter.

Sunday, April 14, 2013

Google Maps mobile app double tapping

Have you ever gotten frustrated at not being able to use Google Maps with one finger? Think it's impossible to zoom in or out without a two-finger pinch?

Try tapping twice quickly and holding down on the second tap. Drag that finger toward or away from that point, and the map zooms accordingly.
Another hidden feature!

Sunday, March 17, 2013

Microsoft attacks Google for snooping - with ridiculous campaign

http://www.scroogled.com/

I really don't like the idea of giving them any more traffic, but it's just too funny to not link to.

The picture of the woman doing the "mea culpa" pose over the man makes me think she just knocked him down--and told him to stay down. He looks surprised. Really weird graphic.


Eyes superimposed over the email

This is a total red herring.

Firstly, Google does not have people actively reading your email to give you ads.

Secondly, is your email really so private that you would be bothered by someone reading it? Everyone should already know not to send personal info through email. I know most of us are guilty, but seriously, folks: no passwords, no credit card numbers, no comments about the diseases you have. Email is not private.

In fact, I would tell people to always imagine creepy eyes above their screen, because it will give them pause to think: 1) "should I be sending this info via insecure email?" and 2) "maybe I should cool down before I send it".

I think most people are willing to have their email scanned in exchange for the service that Google offers. I know I am; GMail is pretty awesome.

Is this really worse than the way Hotmail (now Windows Live Mail, or Outlook.com, I can't remember) used to insert ads under the signature of any email sent from Hotmail? That was pretty sleazy, and it made your @hotmail.com address look even less professional than it already did.


Somehow you are being screwed

I'm not being screwed by Google. Maybe I'm being inconvenienced with ads... but again, that's a small price to pay. Let's not forget that Google offers GMail, YouTube, and even Blogger free of direct fees.

Link to Google's own video

In the Microsoft campaign they include a video snippet from Google, where Google talks about this 'feature'.

Microsoft is trying to pitch this as a negative, but honestly, it puts Google in a pretty good light; in the video the CEO is actually explaining how Google aims not to intrude so much as to be creepy. It's actually reassuring that Google is thinking about where the line of creepiness is.

Product Pick: High-Rely 2-bay AMT disk to disk backup appliance

A great majority of my worrying for my clients comes in the form of backup disaster preparedness. It happens far too often in my line of work that companies don't grasp how a few hundred dollars now will save them (and me) countless hours of worry, lost productivity, lost money, and extreme anxiety.

I don't think I'm alone as a sysadmin in feeling guilt about saying to a client "Sorry, there's nothing that I can do." I feel like it's my fault...sigh.

Hardware is replaceable; data is not. One cannot simply make up data: it was created for a purpose, you pay your employees to create it, and your customers expect you to deliver it. If you need to make a shipment tomorrow and the important details were in an Excel file lost to a failed hard disk, all the money in the world won't bring it back, and more importantly, even if it could, would the data be recreated in time for your deadlines?

Please, back up, and test that you can get your data back. See some of my other posts for hints on software packages that can help with this.

The Dinosaur that Won't Die - Tape

Sysadmins like me really dislike tape backup, and yet it has been the 'go-to' technology for the last, say, 40 years. Linear tape really is an old-fashioned medium: reliable, but difficult to manage. Since data is stored in one long stream (linearly) along the tape, individual file restores are very time consuming. The fact that tape is so cumbersome to write to means you need a big backup software package to control what goes onto each tape and to maintain the indexes of which files live where. The indexes themselves are not usually stored on the tapes, so they have to be recreated if you lose your server, which means manually spending hours feeding tapes while your backup software re-indexes all of the files on each one.

Then of course there's the human factor with tape, where you have to train people to rotate tapes, keep to the schedule, report errors, and possibly interact with the server to see status messages (dangerous!). These users may also become "forgetful" and not take the tapes off-site, or forget to bring them back on-site so the tape data actually gets updated. These elaborate retention and rotation schemes are just asking for trouble, and the incremental nature of most tape backups means you need all of your tapes to get all of your data back; the most recent tapes only contain recently changed files. Yikes!

Another thing to consider is that your disaster recovery plan probably also includes contingencies for your building burning down. Unfortunately, tapes are no good without a compatible tape drive; they require special hardware to be read. Tape drives run from $2000-$4000 new and aren't something you can pick up at your local computer fix-it store. This will delay your server and data restoration, unless of course you purchase an extra tape drive that waits and depreciates in an off-site closet for an event that will hopefully never occur.

So you have a lot of things working against you with tape:
  1. Specialized hardware
  2. Recreation of indexes
  3. Inability/difficulty in restoring
  4. Incremental backups
  5. Laborious and error-prone physical management of media
And yet, even Google, in a data loss incident a few years ago, went back to tape to restore data for the roughly 0.02% of GMail users who had lost emails due to a software update!

While tape is good enough for Google, I believe they have the money for nice automated tape libraries (think "tape jukebox") and dedicated personnel to manage their tapes. Most SMBs don't.

Here I'm going to talk about disk-to-disk backup, because most of my clients' Internet connections are not beefy enough for real online backup, and furthermore most online backup houses will not ship you a hard disk when you need to restore everything; you have to download it all. For the businesses I work with, which average 1TB of data across shares and Outlook PSTs, that's just not reasonable.

So I recommend disk-to-disk backup to all of my clients.

More on Disk Backup

Disk-to-disk backup is as simple as it sounds: you use another hard disk (or disks) to back up the hard disks in your computers and servers. We are up to 4TB per 3.5" hard disk these days, so data density is very good for a competitive price. Combined with modern block-level backup (instead of file-based, like tape), only the changes to data are stored, and these differential block-based backups can be done granularly because hard disks are made for random access, unlike tape.

In my own business, I first tried USB external drives with clients. USB drives always had to be replugged, would spontaneously change drive letters, would have their internal boards fail (Western Digital, I'm looking at you), and people had a nasty habit of knocking them over while they were running, which would just toast them. On top of that, the USB connectors would be destroyed by constant replugging, as they are really not designed for a high number of replug cycles.

Then I looked at purpose-made devices like RDX drives. These seem like an intelligent solution until you notice that you need custom software and have to buy media from the manufacturer at inflated prices, in storage capacities that run 12 months behind the curve. Plus, RDX drives use 2.5" disks, meaning reduced capacity to begin with, and the portability argument for 2.5" drives over 3.5" never convinced me.

How could we combine the reliability and robustness of tape with the random access performance and low price of commodity hard disks?

Highly Reliable Systems to the Rescue

After scouring the Internet for a few hours, I came upon a company based in Reno, Nevada, making a wide range of devices, including a nice appliance that houses two 3.5" standard SATA drives in a hot-pluggable configuration. For extra usability, they include LCD displays and LEDs right on the device to tell the end user the status of the drives and of replication.

They call it the High-Rely 2-bay AMT. I call it common sense.

The way it works is that one drive always remains in the AMT. To the OS, the AMT looks like any other eSATA drive, and the swapping of drives is invisible to the OS, meaning no backups missed because drive letters changed or because something wonky happened on the USB bus.

The trays are nice and beefy, and on High-Rely's site you can see them chuck a tray with disk inside off of a roof and then demonstrate that the hard disk still functions perfectly.

Because the High-Rely looks to the host system like a regular fixed disk, compatibility with backup software is ten-fold better than with tape, RDX, or USB. You can use Windows' new inbuilt backup to great effect. Not only that, but you can use it in 'exotic' scenarios, like hooking it up to a NAS or SAN and doing seamless backups of those devices as well!

And what does the user do to manage the High-Rely? When they come into work and both drives are in the green state, they just unlock either drive, remove it, insert another one, and watch the lights furiously blink until replication is complete. There is always one drive off-site, just like we IT people like it.

All of the RAID1 replication is done inside the High-Rely, so there is no load on the OS and no management of RAID or the replication process. It should be noted, however, that running a backup to the High-Rely while it is replicating between drives will increase the time until both drives are ready, and will slow the backup job as well.

The sleds are aluminum, with an LCD on the front and a hot-swap connector on the back. They don't use the drive's actual SATA connector, which reduces the likelihood of damaging it through repeated replugging. Four screws hold the disk in the caddy, so you can swap the drive for a higher-capacity one down the line. I asked High-Rely support about this and, surprisingly, they didn't threaten me with claims of invalidating the warranty; they actually laughed and said that's the point! When you need to recover, you can simply take the SATA drive out, connect it to a computer, and get your data. What a concept!

Oh, and with Windows Server Backup (wbadmin.exe) you get unlimited retention (until the disk is full), so each disk contains weeks of revisions of every file on your server. One client of mine has 800GB of data and, with daily "full" block-based backups, gets 12 days' worth of complete snapshots on each 2TB sled. Very cool.
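
If you're ever curious about what has accumulated on a sled, wbadmin will list every restorable snapshot; something like this, with E: standing in for the High-Rely volume:

wbadmin get versions -backupTarget:E: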

Some Issues

The one thing I can complain about is that the optional host software looks silly. I use it anyway, because it offers features like email notification and a visual status display separate from the front panel.

I called their HQ, spoke to the owner, and asked him about integrating other software with this communication channel over SATA. He said the Chinese company that makes the RAID solution used in the High-Rely will not disclose how they send information over the SATA link, so we're stuck with what ships.

It also seems that after a power failure, the High-Rely can end up in a state where it no longer knows how to replicate. There is a process I have to go through every few months with a client to get it back to normal. It is a simple fix and no data is lost; I don't have to reconfigure or reset backups. With a proper UPS and a stable power grid, I suspect this would never happen.

Summary

So, in conclusion, I always recommend this solution to clients if they have lots of data (over 500GB) and are willing to shell out the approximately $900 for the appliance and three sleds with drives.

High-Rely also offers some more sophisticated NAS-based devices with large swappable cages that hold 3 drives each. That's 3 x 4TB, or 12TB, for those in the graphics or video industry.

Wednesday, January 16, 2013

Can't suspend backups with wbadmin

Did you know that Windows Server 2008 and later include a new wbadmin.exe that replaces ntbackup.exe?

Thank goodness, right? Because ntbackup.exe fostered the creation of a whole backup software industry based on its inadequacy as a backup tool.

Having used wbadmin.exe, I have to say I was initially impressed with Microsoft's renewed commitment to bringing universal OS and server features back under the auspices of the OS designer, where they belong.

The fact that basic backup had to be outsourced to third-party software vendors was silly in the first place. Linux, for example, already includes LVM2 for snapshotting, with plenty of adequate backup tools like rsync, rsnapshot, and dd readily available.
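
For comparison, a bare-bones disk-to-disk backup with a consistent point-in-time view takes only a few lines on Linux (volume group and path names here are hypothetical):

# snapshot the data volume, copy it out, then clean up
lvcreate -L 2G -s -n data_snap /dev/vg0/data
mount -o ro /dev/vg0/data_snap /mnt/snap
rsync -a --delete /mnt/snap/ /backup/data/
umount /mnt/snap
lvremove -f /dev/vg0/data_snap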

I'll go into wbadmin in another post, but for the time being I wanted to point out that while testing other backup software, it's impossible to suspend wbadmin from doing its thing.

Even at the CLI, you simply can't pause backups. Look in the Task Scheduler, and you'll find no mention that wbadmin is even set to run at all!

In fact, the only way to preemptively cancel Windows Server Backup is to delete the entire backup schedule. Your previous backups are kept at that point, but when you go to re-enable the schedule, Windows will format the fixed disk you have been using and delete your accumulated backups! Talk about unintuitive and dangerous!

So basically, you can't suspend backups at all with wbadmin. If you don't want to lose your backup history, your only option is to wait until the scheduled job starts and issue 'wbadmin stop job' at the CLI, and you must remember to do this shortly after every scheduled backup starts!
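
For reference, the stop-it-by-hand dance looks like this, run shortly after the scheduled start time:

wbadmin get status
wbadmin stop job -quiet

The first command confirms a job is actually running; the second cancels it without the interactive confirmation prompt.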

Here are some pointers to MS:
  • Include wbadmin in the Task Scheduler
  • Allow suspension of backup plan in the GUI or at least CLI
    • and as an extension, don't delete the previous backups and require running the backup wizard from scratch again
  • Warn the user with a taskbar warning or something when backup is suspended

And for you, faithful sysadmin: stay on your toes, and make sure wbadmin has got your back.