Microsoft Partner Enablement – Hyper-V

There are three installation options for deploying Hyper-V:

  • Server with a GUI
  • Server Core (with the option to install other roles)
  • The free standalone Hyper-V Server 2012 (Server Core without other roles)

As a hypervisor, all three options provide the same features and capabilities.
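Whichever option you choose, the Hyper-V role itself is enabled the same way. As a minimal sketch (assuming Windows Server 2012 with the Server Manager PowerShell module available; on the free standalone Hyper-V Server the role is already present):

```powershell
# Install the Hyper-V role plus the management tools (the Hyper-V PowerShell
# module and Hyper-V Manager); a restart is required before the hypervisor
# is active.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# After the reboot, confirm the role shows as Installed.
Get-WindowsFeature -Name Hyper-V
```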


Traditional per-server installation options include:

  • Deploy from DVD/ISO
  • Deploy from USB Stick
  • Network Deployment

Traditional network deployment methods:

  • System Center Configuration Manager 2012 SP1 (SCCM)
  • Microsoft Deployment Toolkit 2012 U1 (MDT)
  • Windows Deployment Services (WDS)

The traditional network deployment methods offer different benefits depending on the level of automation you require. For example:

  • MDT enables a LiteTouch deployment, which effectively means MANY of the choices the deployment wizards would normally prompt for are automated during the deployment process.
  • SCCM enables a ZeroTouch deployment, which effectively means ALL of those choices are automated, requiring no interaction at all.

The work behind the scenes varies massively between a LiteTouch and a ZeroTouch deployment and needs to be considered when implementing. A ZeroTouch deployment would be a great solution in any environment, but it is really aimed at a large-scale rollout of, say, 50+ hosts; a 10-user business may be better served by LiteTouch (for ease of redeployment) or a physical deployment via DVD or USB.

The preferred Hyper-V deployment option is to utilise Virtual Machine Manager (VMM), which builds on the SCCM and WDS capabilities but is designed around a virtual infrastructure.

Whichever network deployment option you decide on, be mindful that Microsoft Hyper-V uses a stateful PXE deployment; at the time of writing, VMware Auto Deploy (introduced in vSphere 5.0) is the only option of the two hypervisors that allows stateless deployment.

At this point the course compared Hyper-V and ESXi scalability maximums. I'll summarise them here, as we rarely reach these limits in any environment and can reference them if needed.


Per host:

  • Max 320 logical processors and 4TB physical memory
  • 1,024 VMs

Per cluster:

  • 64 physical nodes
  • 8,000 VMs

Per VM:

  • 64 virtual processors and 1TB RAM
  • In-guest **NUMA support (the virtual NUMA topology aligns with host resources to increase performance)

**NUMA – Non-Uniform Memory Access – as defined by Wikipedia:

Non-Uniform Memory Access is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors).
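On a Hyper-V 2012 host you can inspect the NUMA topology and control whether VMs are allowed to span NUMA nodes. A sketch using the Hyper-V PowerShell module that ships with Server 2012:

```powershell
# List the host's NUMA nodes along with their processor and memory resources.
Get-VMHostNumaNode

# Disabling NUMA spanning forces each VM to fit within a single NUMA node,
# trading placement flexibility for more predictable memory performance.
Set-VMHost -NumaSpanningEnabled $false
```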

Support for Guest OSes (maximum virtual CPUs per guest)

  • Windows 8 – 32 virtual CPUs
  • Windows 7 / 7 SP1 – 4 virtual CPUs
  • Vista SP2 / XP SP3 / XP x64 SP2 – 2 virtual CPUs
  • CentOS / Red Hat / SUSE / openSUSE / Ubuntu – 64 virtual CPUs
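Within those limits, the vCPU count is set per VM. A minimal sketch (the VM name "App01" is hypothetical, and the VM must be powered off before changing its processor count):

```powershell
# Assign 4 virtual processors to the (hypothetical) VM "App01".
Set-VMProcessor -VMName "App01" -Count 4

# Confirm the new processor count.
Get-VMProcessor -VMName "App01" | Select-Object VMName, Count
```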


Thanks for viewing – I hope this post serves as a helpful reference.

Steve – Guru365

