
Veeam Community Forums Digest for dandvo [Jan 22 - Jan 28, 2018]

Veeam Community Forums Digest

January 22 - January 28, 2018

 

THE WORD FROM GOSTEV

All sales kickoffs have been completed, and the Veeam team is now globally ALL IN and well prepared for a great 2018 – while I personally am really looking forward to not seeing airports for the next few weeks, and spending most of my time on R&D matters!

Here's a good reminder for everyone using direct storage access transport mode for VMware backups, including backup from storage snapshots – based on one recent support case. When backing up a VM with heavily fragmented disks, you may observe one funky behavior if a VM has existing snapshots (in addition to the one created by Veeam for backup purposes). Basically, there will be a flood of Map Disk Region messages in the vCenter Server in this case – and our customers often open support cases to ask what's up with those, thinking this is some sort of a problem with Veeam. Especially because switching to other transport modes makes these messages disappear completely.

By itself, this behavior is totally normal: since the backup proxy accesses the storage directly, VMware VDDK needs to request a lease for each backed-up disk region from vCenter. However, if the VMDK is heavily fragmented (as opposed to consisting of just a few contiguous regions) – which is especially common for virtual disks that have grown over time – then you'll be getting A LOT of those messages. Curiously enough, this is not the case with other transport modes – because with those, the backup proxy goes to the storage through the ESXi I/O stack, which already "owns" the processed VMDK.

Now, you also won't see these messages when backing up from storage snapshots, because in this case we read the VMDK data ourselves, directly from the storage snapshot of the datastore LUN, as opposed to going through VDDK. However, in this case you may face another issue if virtual disks are heavily fragmented: the job will spend significant time on the "Collecting disk files location data" step for the given VM (10 minutes in the support case in question). Again, the reason is simple – sometimes, there are just way too many disk regions to process.
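To make the fragmentation effect concrete, here is a minimal illustrative sketch (my own model, not Veeam or VDDK code): given a virtual disk's block allocation map, the number of physically contiguous extents roughly corresponds to the number of regions that must be leased or located – a disk that grew in one piece yields one extent, while a disk that grew over time yields many.

```python
# Illustrative sketch (not Veeam/VDDK code): count the contiguous
# extents in a virtual disk's block allocation map. Each extent
# roughly corresponds to one disk region that has to be leased
# (direct storage access) or located (backup from storage snapshots).

def contiguous_extents(block_map):
    """block_map maps logical block -> physical block (allocated blocks only).

    Returns the number of physically contiguous runs of blocks.
    """
    extents = 0
    prev_logical = prev_physical = None
    for logical in sorted(block_map):
        physical = block_map[logical]
        # A new extent starts whenever logical or physical adjacency breaks.
        if (prev_logical is None
                or logical != prev_logical + 1
                or physical != prev_physical + 1):
            extents += 1
        prev_logical, prev_physical = logical, physical
    return extents

# A disk written in one piece: a single extent, so a single region to process.
linear = {i: 1000 + i for i in range(8)}

# A disk that grew over time: the same 8 blocks, scattered on the datastore.
fragmented = {0: 1000, 1: 1001, 2: 5000, 3: 5001,
              4: 2000, 5: 9000, 6: 9001, 7: 3000}

print(contiguous_extents(linear))      # → 1
print(contiguous_extents(fragmented))  # → 5
```

The same amount of data produces five times the region-processing overhead here; on a real multi-terabyte VMDK that has grown for years, the extent count – and with it the message flood or the "Collecting disk files location data" time – can be orders of magnitude larger.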

So, how do you fix bad virtual disk fragmentation? Luckily, it is super easy – all you need to do is perform a Storage vMotion, which will effectively consolidate all of the VMDK chunks together. Unfortunately, Storage vMotion is not available in every vSphere edition – in this case, Veeam Quick Migration to the rescue! Quick Migration is a real gem – over the years, our users have shared so many use cases I hadn't even thought of when adding this feature that I'm almost embarrassed by now. And best of all – being a part of Veeam Backup Free Edition, Quick Migration is completely free for anyone to use!

To finish off, a couple of important Spectre/Meltdown updates. First, Intel seems to have nailed the root cause of the intermittent reboots of Broadwell- and Haswell-based computers that started to happen after installing the initial firmware updates, and fixed the bug. So yeah, not cool – but on the other hand, how do you balance quality vs. delivery time with critical security issues like these? It's certainly a really tough call for QC management, with all the concurrent pressure from everyone to release ASAP.

Second, Google claims to have invented a brilliant fix that, unlike existing approaches, does NOT come with a performance penalty – sounds too good to be true, but we shall see. One thing is for sure – Google has really shown itself as a thought leader in the industry, and arguably they are by far the biggest winner in this whole disaster. Just think about them vs. any other IT gorilla in the context of Spectre/Meltdown, and their respective roles! Yep, Google clearly demonstrated to everyone which company has all the expertise and top scientists these days. And honestly, I would not at all be surprised if a few years from now, public cloud is no longer all about Amazon and Microsoft alone.

 

BEST POST OF THE WEEK

Re: 6.8GB file restore job is going to take 16 hours??   [BY: Gostev • LIKED: 4 times]

Just wanted to add one comment on the original issue. Starting from 9.5, before restoring the selected files back to the original VM, we mount the backup to the mount server associated with the corresponding backup repository, and copy the required files to the target VM directly – specifically to ensure that restore traffic is isolated to the remote site. The issue in this case is that instead of doing an in-place Restore, you are using the Copy To operation (which uses the regular Windows file copy API running on the backup console, as opposed to our direct restore engine).

 

TOP CONTENT

SQL Server transaction log suddenly growing   [VIEWS: 214 • REPLIES: 6]

We've been backing up a VM running SQL Server for a long time, and suddenly the drive containing the datastore is losing space. It looks to me like the transaction log for just one of the databases has started growing. more

VMCE EXAM   [VIEWS: 204 • REPLIES: 4]

Hello,
The https://veeam.university/vmce/practice exam practice is less than helpful. I have been trying to take this exam and know the responses are correct, yet no score? I have tried multiple times to re-load the exam to try again, and nothing seems to work. more

UDP traffic seen as DDoS attacks   [VIEWS: 189 • REPLIES: 5]

Hi,
We're currently using Veeam (9.5, latest update) to back up external customers' servers to our dedicated Veeam server installed in a datacenter.
Customers use their fiber WAN link, and we set up an IPsec VPN between the customers and our server. more

Veeam design for branch office   [VIEWS: 178 • REPLIES: 5]

Hello,
we are going to implement a new backup solution with Veeam. Our current scenario is the following:
HQ with 5 ESXi hosts in a cluster, with about 50 VMs more

Complete fix/patch list for U3 ?   [VIEWS: 172 • REPLIES: 3]

Anyone have it? more

 

YOUR CONTENT

None of the topics you have contributed to were updated this week.

 
