So I decided to install another Hyper-V server using Windows Server 2012, and I created another Exchange server to add to my DAG. I thought using .VHDX disks for the VM was great because of all the new features.

Upon connecting the newly created Exchange 2010 server to the existing DAG, I noticed that it would not sync. So I dismounted the store, moved all log files to another folder, cleared out the “Passive Copy” folders and let the sync run again.

After 3 hours I had a “healthy” copy, but the CopyQueueLength and ReplayQueueLength just would not clear. I went and checked the log files, and Exchange was complaining about a sector size mismatch.
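You can see the geometry Exchange is complaining about directly from the Hyper-V host. A rough sketch (the path and VM name are examples, not from my setup) using the built-in Get-VHD cmdlet, which reports the virtual disk’s logical and physical sector sizes:

```powershell
# Example path only - run this on the Hyper-V host with the Hyper-V module loaded.
# A legacy .VHD always reports a 512-byte logical sector; a .VHDX on large-sector
# storage can report 4096, which Exchange 2010 log replay does not expect.
Get-VHD -Path 'D:\VMs\EXCH02\EXCH02.vhdx' |
    Select-Object Path, VhdFormat, LogicalSectorSize, PhysicalSectorSize
```

If LogicalSectorSize comes back as anything other than 512 on the new copy’s disk, that lines up with the replay errors in the logs.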

I then realized that the newly created .VHDX format works differently and has these features:

  • Support for virtual hard disk storage capacity of up to 64 TB.
  • Protection against data corruption during power failures by logging updates to the VHDX metadata structures.
  • Improved alignment of the virtual hard disk format to work well on large sector disks.

The VHDX format also provides the following features:

  • Larger block sizes for dynamic and differencing disks, which allows these disks to attune to the needs of the workload.
  • A 4-KB logical sector virtual disk that allows for increased performance when used by applications and workloads that are designed for 4-KB sectors.
  • The ability to store custom metadata about the file that the user might want to record, such as operating system version or patches applied.
  • Efficiency in representing data (also known as “trim”), which results in smaller file size and allows the underlying physical storage device to reclaim unused space. (Trim requires physical disks directly attached to a virtual machine or SCSI disks, and trim-compatible hardware.)

So I decided, on Windows Server 2012, to convert the newly created VM’s disk back to .VHD. After doing this, all the queues cleared in a matter of minutes and Exchange was as happy as it could be.
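The conversion itself is a one-liner with the Convert-VHD cmdlet. A sketch, assuming the same example path and VM name as above (shut the VM down first; the disk cannot be attached to a running VM, and the drive attachment shown assumes an IDE controller at 0:0):

```powershell
# Convert the disk - Convert-VHD infers the target format from the extension.
Convert-VHD -Path 'D:\VMs\EXCH02\EXCH02.vhdx' -DestinationPath 'D:\VMs\EXCH02\EXCH02.vhd'

# Point the VM's disk controller at the new .VHD file.
Set-VMHardDiskDrive -VMName 'EXCH02' -ControllerType IDE `
    -ControllerNumber 0 -ControllerLocation 0 -Path 'D:\VMs\EXCH02\EXCH02.vhd'
```

Note the trade-off: going back to .VHD gives up the 64 TB capacity and the metadata-logging corruption protection listed above, but it restores the 512-byte sector behavior that Exchange 2010 expects.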
