Microsoft Virtualization Solutions: Hyper-V and the "Hypervisor Is Not Running" Error

26.07.2021

In this article I describe only the errors that I personally ran into while installing and configuring Hyper-V Server 2012. Other errors and ways to resolve them are covered on the Microsoft website (unfortunately, mostly in English only).

Errors during installation.

Problem: At the final stage of the Hyper-V Server 2012 installation (after the last reboot), the system does not boot: a black screen, no response to keystrokes, only a hard reset helps; booting into Safe Mode is still possible.
Cause: The OS does not support, or is not compatible with, the USB 3.0 drivers.
Solution: Disable the USB 3.0 controller and all associated devices in the BIOS.

Problem: At the final stage of the Hyper-V Server 2012 installation (after the last reboot), the system does not boot: a black screen, no response to keystrokes, only a hard reset helps, and booting into Safe Mode is impossible.
Cause:
Solution: Try the solution suggested by the author of the referenced article.

Errors during setup and use.

Problem: The network adapter is not displayed in the Hyper-V Server configuration console (item 8).
Cause: 1) No cable is plugged into the network adapter;
2) Problems with active (switch, router, etc.) or passive (cables, sockets, patch panel, etc.) network equipment.
Solution: 1) Plug in the cable;
2) Check that the network equipment is working.

Problem: When you try to execute a console command such as netsh advfirewall firewall set rule group="..." new enable=yes, the error message "Group cannot be specified with other identification conditions" appears.
Cause: The command was pasted into the console, and typographic ("curly") quotation marks were pasted along with it.
Solution: Type the command by hand, or simply delete and retype the quotation marks so that they are plain straight quotes.
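For example, the remote volume management rule used later in this article should be typed with straight quotes and no spaces around the equals signs:

netsh advfirewall firewall set rule group="Remote Volume Management" new enable=yes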

Problem: Hyper-V Manager displays the error "Access denied. Unable to establish communication between the server and the client."
Cause: The user has not been granted remote launch and activation rights in DCOM.
Solution: All of the following steps are performed on the client computer:
1) Run the Component Services snap-in with full administrator rights, for example by running %SystemRoot%\System32\dcomcnfg.exe.
2) In the console tree, expand Component Services and then Computers.
3) From the context menu of the "My Computer" object, select "Properties".
4) In the "My Computer Properties" window, select the "COM Security" tab.
5) In the "Access Permissions" section, click the "Edit Limits" button.
6) In the "Access Permission" dialog box, select ANONYMOUS LOGON in the "Group or user names" list, and under "Permissions for ANONYMOUS LOGON" check "Allow" for "Remote Access".
7) Close all dialog boxes by clicking OK.

Problem: Hyper-V Manager displays the error "Unable to connect to the RPC service on the remote computer xxx.xxx.xxx.xxx. Make sure the RPC service is running."

Cause: 1) The necessary firewall rules have not been created.
2) The hosts file does not contain a one-to-one mapping between the computer's IP address and its network name.

Solution: 1) There are two ways to address the firewall rules:

a) Disable the firewall on both the client and the server (not recommended).
b) Create firewall rules on the client and the server by entering the following commands:
For remote disk management:
netsh advfirewall firewall set rule group="Remote Volume Management" new enable=yes
To manage the Windows Firewall snap-in remotely:
netsh advfirewall firewall set rule group="Windows Firewall Remote Management" new enable=yes
2) To bind the server name and IP address unambiguously, add an entry to the hosts file. For example: 192.168.1.100 HVserver
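A minimal sketch of adding and checking such an entry from an elevated command prompt (192.168.1.100 and HVserver are just the example values from above):

echo 192.168.1.100 HVserver>> %SystemRoot%\System32\drivers\etc\hosts
ping HVserver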

Problem: Hyper-V Manager displays the error "The virtual machine could not be started because the hypervisor is not running."

Cause: This error has various possible causes; they are discussed in the material below.

Hyper-V is an example of server virtualization technology. This means that Hyper-V allows you to virtualize an entire computer by running multiple operating systems (usually server-based) on a single physical computer (usually with server-grade hardware). Each guest operating system thinks (if the operating systems can think) that it owns the computer and has the exclusive right to use its hardware resources (or any other set of computer resources that the virtual machine has access to). Thus, each operating system runs in a separate virtual machine, with all virtual machines running on the same physical computer. In a typical non-virtualized environment, only one operating system can run on a computer. Hyper-V technology provides the computer with this capability. Before looking at how Hyper-V technology works, we need to understand the general principles of how virtual machines work.

Understanding virtual machines

A virtual machine is a computing environment implemented in software that allocates the hardware resources of a physical computer in such a way as to provide the ability to run multiple operating systems on a single computer. Each operating system runs in its own virtual machine and has dedicated logical instances of processors, hard drives, network cards, and other computer hardware resources. An operating system running in a virtual machine has no knowledge that it is running in a virtual environment and behaves as if it had complete control over the computer's hardware. Implementing virtual machines as described above means that server virtualization must be implemented in accordance with the following requirements:

  • Management interfaces
    Server virtualization requires management interfaces that allow administrators to create, configure, and control virtual machines running on a computer. These interfaces must also support software administration and run over the network to enable remote management of virtual machines.
  • Memory management
    Server virtualization requires a memory manager to ensure that all virtual machines receive dedicated and isolated memory resources.
  • Scheduler
    Server virtualization requires a scheduling mechanism to control virtual machine access to physical resources. The scheduler must be configurable by the administrator and be able to assign different priority levels for access to the hardware.
  • Finite state machine
    Server virtualization requires a state machine that keeps track of the current state of all virtual machines on a computer. Virtual machine state information includes information about CPU, memory, devices, and the state of the virtual machine (started or stopped). The state machine must also support the management of transitions between different states.
  • Storage and networking
    Server virtualization requires the ability to provision storage and network resources on the computer so that each virtual machine has separate access to hard disks and network interfaces. In addition, server virtualization requires that multiple virtual machines be able to access physical devices at the same time while maintaining consistency, isolation, and security.
  • Virtualized devices
    Server virtualization requires virtualized devices that provide operating systems running on virtual machines with logical representations of devices that behave the same as their physical counterparts. In other words, when an OS accesses a physical device from a virtual machine, it accesses the corresponding virtualized device, which is identical to the process of accessing a physical device.
  • Virtual device drivers
    To virtualize the server, virtual device drivers must be installed in the operating systems running on the virtual machines. Virtual device drivers give applications access to virtual representations of hardware and I/O connections, just as they would have to physical hardware.
Below we will see that Microsoft's Hyper-V server virtualization solution meets all of these requirements, but first, let's look at the core software component that makes server virtualization possible: the hypervisor.

Understanding the hypervisor

A hypervisor is a virtualization platform that allows multiple operating systems to run on a single physical computer, the host computer. The main job of the hypervisor is to create isolated execution environments for all virtual machines and to manage communication between the guest operating system in each virtual machine and the underlying hardware resources of the physical computer. The term "hypervisor" was coined in 1972, when IBM updated the control program of its System/370 computing platform to support virtualization. The creation of the hypervisor was a milestone in the evolution of computing, because it made it possible to overcome architectural limitations and reduced the cost of using mainframes. Hypervisors differ in type, that is, whether they run directly on physical hardware or are hosted inside an operating system, and they can also be categorized by design: monolithic or microkernel.

Type 1 hypervisors

Type 1 hypervisors run directly on the underlying physical hardware of the host computer and act as the control program; in other words, they run on "bare metal". The guest operating systems run in virtual machines located above the hypervisor layer (see Figure 1).

Because Type 1 hypervisors run directly on the hardware rather than inside an OS, they generally provide better performance, availability, and security than other types. Type 1 hypervisors are used in the following server virtualization products:

  • Microsoft Hyper-V
  • Citrix XenServer
  • VMware ESX Server

Type 2 hypervisors

Type 2 hypervisors run inside an operating system that runs on the host computer. In this case, guest operating systems run in virtual machines on top of the hypervisor (see Figure 2). This type of virtualization is commonly referred to as hosted virtualization. Comparing Figure 2 with Figure 1, it is clear that guest operating systems running in virtual machines on Type 2 hypervisor platforms are separated from the underlying hardware by an extra layer. This extra layer between the virtual machines and the hardware degrades performance on Type 2 platforms and limits the number of virtual machines that can be run in practice. Type 2 hypervisors are used in the following server virtualization products:

  • Microsoft Virtual Server
  • VMware Server
The desktop virtualization product Microsoft Virtual PC also uses a Type 2 hypervisor architecture.

Monolithic hypervisors

In a monolithic hypervisor architecture, the device drivers that support the underlying hardware reside within, and are managed by, the hypervisor itself (see Figure 3).

The monolithic architecture has both advantages and disadvantages. For example, monolithic hypervisors do not require a controlling (parent) operating system, because all guest systems interact with the underlying computer hardware directly through the hypervisor's device drivers; this is one of the advantages of the monolithic approach. On the other hand, the fact that drivers have to be written specifically for the hypervisor creates significant difficulties, because the market contains a huge variety of motherboards, storage controllers, network adapters, and other equipment. As a result, vendors of monolithic hypervisor platforms have to work closely with hardware manufacturers to make sure that the drivers for these devices support the hypervisor, which also leaves hypervisor vendors dependent on hardware manufacturers to supply the necessary drivers. The range of devices that can be used by virtualized operating systems on monolithic hypervisor platforms is therefore much narrower than when the same operating systems run on physical computers. An important drawback of this architecture is that it ignores one of the key security principles: defense in depth, in which several lines of defense are created. In this model there is no defense in depth, because everything runs in the most privileged part of the system. An example of a server virtualization product that uses a monolithic hypervisor architecture is VMware ESX Server.

Microkernel hypervisors

Microkernel hypervisors do not require hypervisor-specific drivers, because an operating system running in the main (parent) partition provides the runtime environment that device drivers need to access the underlying physical hardware of the host computer. Partitions are discussed later; for now, think of the term "partition" as roughly equivalent to a virtual machine. On microkernel hypervisor platforms, device drivers need to be installed only for the physical devices in the parent partition. There is no need to install these drivers in the guest operating systems, because the guests only need to call into the parent partition to reach the physical hardware of the host computer. In other words, the microkernel architecture does not give guest operating systems direct access to the underlying hardware; physical devices are reached only by interacting with the parent partition. Figure 4 shows the microkernel hypervisor architecture in more detail.

The microkernel architecture has several advantages over the monolithic one. First, because no hypervisor-specific drivers are needed, the wide range of existing drivers supplied by hardware vendors can be used. Second, device drivers are not loaded into the hypervisor, so the hypervisor carries less load, is smaller, and is more robust. Third, and most importantly, the potential attack surface is minimized, because no extraneous code is loaded into the hypervisor (device drivers are written by third parties and are therefore extraneous code from the hypervisor developer's point of view). Malware getting into the hypervisor and taking control of every virtual OS on the computer is the last thing you want to experience. The only downside of the microkernel design is the need for a special parent partition; this adds some overhead (although it is usually minimal), because child partitions have to interact with the parent partition to reach the hardware. A significant advantage of the Hyper-V microkernel architecture is that it provides defense in depth: Hyper-V minimizes the code that executes in the hypervisor and pushes more functions up the stack (for example, the state machine and the management interfaces run higher up the stack, in user mode). What is an example of a microkernel server virtualization platform? It is, of course, Microsoft Hyper-V, with Windows Server 2008 or later running in its parent partition.

Main features of Hyper-V

Below are some of the highlights of the original version of the Microsoft Hyper-V platform:

  • Support for various OS
    Hyper-V supports the simultaneous execution of various types of operating systems, including 32-bit and 64-bit operating systems on various server platforms (for example, Windows, Linux, etc.).
  • Extensibility
    Hyper-V provides standard Windows Management Instrumentation (WMI) interfaces and APIs that enable ISVs and developers to quickly create custom tools and extensions for the virtualization platform (a small query example follows this list).
  • Network load balancing
    Hyper-V includes virtual switch capabilities that allow virtual machines to be configured with Windows Network Load Balancing (NLB) to balance load across virtual machines on different servers.
  • Microkernel architecture
    Hyper-V has a 64-bit microkernel hypervisor architecture that lets the platform provide a wide range of device support methods along with improved performance and security.
  • Hardware virtualization
    Hyper-V requires the use of Intel-VT or AMD-V hardware virtualization technologies.
  • Hardware Sharing Architecture
    Hyper-V uses a Virtualization Service Provider (VSP) and Virtualization Services Client (VSC) architecture to provide enhanced access and utilization of hardware resources (such as disks, network, and video).
  • Fast migration
    Hyper-V allows you to move a running virtual machine from one physical host to another with minimal downtime, relying on the high-availability features of Windows Server 2008 and the System Center management tools.
  • Scalability
    Hyper-V supports multiple processors and cores at the host level, and extended memory access at the virtual machine level. This support provides scalability for virtualization environments to host large numbers of virtual machines on a single host. However, quick migration capabilities also allow scaling across multiple sites.
  • Symmetric Multiprocessor Architecture (SMP) support
    Hyper-V supports up to four processors in a virtual machine environment for running multithreaded applications in a virtual machine.

  • Virtual machine snapshots
    Hyper-V provides the ability to take snapshots of running virtual machines for fast rollback, which improves backup and recovery solutions.
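As an illustration of the WMI interface mentioned in the extensibility item above, the virtual machines on an original Hyper-V or Hyper-V R2 host can be listed by querying the root\virtualization namespace. This is only a sketch: on Windows Server 2012 and later the namespace is root\virtualization\v2, and the physical host itself also appears in the output.

wmic /namespace:\\root\virtualization path Msvm_ComputerSystem get ElementName,EnabledState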
All of these features are covered in this overview, but the most interesting ones are those added to Hyper-V in R2. They are described below.

What's New in Hyper-V R2

New functionality has been added to the Hyper-V role in Windows Server 2008 R2. They improve the flexibility, performance, and scalability of Hyper-V. Let's consider them in more detail.

Increased flexibility

Hyper-V R2 contains the following new features that increase the flexibility to deploy and maintain a server virtualization infrastructure:

  • Live migration
    Hyper-V R2 contains a live migration feature that allows you to move a virtual machine from one Hyper-V server to another without interrupting the network connection, without user downtime, and without disrupting service. Moving is accompanied by only a decrease in productivity for a few seconds. Live migration enables high availability of servers and applications running on clustered Hyper-V servers in a virtualized data center environment. Live migration also simplifies the process of upgrading and maintaining the host computer hardware, as well as providing new capabilities, such as the ability to balance network load for maximum energy efficiency or optimal use of the processor. Live migration is detailed below in the "Working with live migration" section.
  • Cluster Shared Volumes
    Cluster Shared Volumes are a new feature of Windows Server 2008 R2 Failover Clustering. It provides a single and consistent file namespace that allows all cluster nodes to access the same storage device. The use of Cluster Shared Volumes is highly recommended for live migrations and is described below in the "Working with live migration" section.
  • Supports hot add and remove storage media
    The R2 version of Hyper-V allows you to add or remove virtual hard disks and pass-through disks on a running virtual machine without shutting down and restarting it. This allows all of the storage used by the virtual machine to be tuned without downtime to accommodate changing workloads. In addition, it provides new backup capabilities for Microsoft SQL Server, Microsoft Exchange Server, and data centers. To use this feature, virtual and pass-through disks must be connected to the virtual machine using a virtual SCSI controller. For more information on adding SCSI controllers to virtual machines, see the "Managing Virtual Machines" section below.
  • Processor compatibility mode
    The new processor compatibility mode available in Hyper-V R2 allows a virtual machine to be moved from one host computer to another as long as the processor vendors match (AMD or Intel). This makes it easier to upgrade the Hyper-V host infrastructure, because virtual machines can be migrated from computers with older hardware to computers with newer hardware, and it also provides more flexibility when migrating virtual machines between cluster nodes. For example, processor compatibility mode can be used to migrate virtual machines from an Intel Core 2 host to an Intel Pentium 4 host, or from an AMD Opteron host to an AMD Athlon host. Note that processor compatibility mode only allows migrations between nodes with the same processor vendor: AMD-to-AMD and Intel-to-Intel migrations are supported, while AMD-to-Intel and Intel-to-AMD migrations are not. For more information on processor compatibility mode and how to configure it, see the "How It Works: Processor Compatibility Mode" sidebar.

Increased productivity

Hyper-V R2 contains the following new features that can improve the performance of a server virtualization infrastructure:

  1. Supports up to 384 simultaneously running virtual machines and up to 512 virtual processors on each server
    With the appropriate hardware, Hyper-V R2 can be used to reach previously unattainable levels of server consolidation. For example, one Hyper-V host can host:
    • 384 virtual machines with one processor (significantly less than the 512 virtual processor limit)
    • 256 virtual machines with two processors (total of 512 virtual processors)
    • 128 virtual machines with four processors (total 512 virtual processors)

    You can also run any combination of one-, two-, and four-processor virtual machines, as long as the total number of virtual machines does not exceed 384 and the total number of virtual processors allocated to them does not exceed 512. These capabilities let Hyper-V R2 deliver the highest virtual machine density currently available on the market. By comparison, the previous version of Hyper-V in Windows Server 2008 SP2 supported only up to 24 logical processors and up to 192 virtual machines. Note that when using failover clusters, Hyper-V R2 supports up to 64 virtual machines per cluster node.

  2. Support for Second Level Address Translation (SLAT)
    In Hyper-V R2, address translation for virtual machines is handled by the processor rather than by Hyper-V code that maintains the mapping tables in software. SLAT adds a second level of paging beneath the x86/x64 page tables, providing a layer of indirection between virtual machine memory accesses and physical memory accesses.
    When used with appropriate processors (for example, Intel processors with Extended Page Tables (EPT), available starting with the Core i7 generation, or recent AMD processors with Nested Page Tables (NPT)), Hyper-V R2 significantly improves system performance in many scenarios. The gains come from improvements in memory management and from a reduction in the number of memory copies needed to use these processor features. Performance improves especially with large data sets (for example, with Microsoft SQL Server). Memory overhead for the Microsoft hypervisor can drop from about 5 percent to about 1 percent of total physical memory, which leaves more memory available to child partitions and allows a higher degree of consolidation.

  3. VM Chimney (TCP Chimney Offload)
    This feature allows the TCP/IP traffic of a virtual machine to be offloaded to the host's physical network adapter. The physical network adapter and the OS must support TCP Chimney Offload, which improves virtual machine performance by reducing the load on the logical processors; TCP Chimney Offload support appeared in earlier versions of Microsoft Windows (see the netsh example after this list).
    Note that not every application benefits from this feature: applications that use preallocated buffers and long-lived connections with large data transfers gain the most from enabling it. Also keep in mind that physical NICs that support TCP Chimney Offload can handle only a limited number of offloaded connections, and that limit is shared by all the virtual machines on the host.

  4. Virtual Machine Queue (VMQ) support
    Hyper-V R2 supports Virtual Machine Device Queues (VMDq), part of Intel Virtualization Technology for Connectivity. VMQ moves the task of sorting virtual machine network traffic from the Virtual Machine Manager to the network controller. This lets a single physical NIC appear to the guests as multiple NICs (queues), which optimizes CPU utilization, improves network throughput, and provides better traffic management for virtual machines. The host no longer has to buffer direct memory access (DMA) data from the devices, because the network adapter can DMA packets straight into the memory of the virtual machine; shortening the I/O path in this way improves performance. For more information on VMDq, see the Intel website at http://www.intel.com/network/connectivity/vtc_vmdq.htm.
  5. Jumbo frame support
    Jumbo frames are Ethernet frames that carry more than 1,500 bytes of payload. They were previously available only in non-virtualized environments; Hyper-V R2 makes them available to virtual machines and supports frames of up to 9,014 bytes (if the underlying physical network supports them).

As a result, it can improve network bandwidth and reduce CPU usage when transferring large files.
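Returning to the VM Chimney item above: on Windows Server 2008 and 2008 R2 hosts, TCP Chimney Offload is controlled with netsh. A sketch of checking and enabling it on the host (the physical NIC and the guest OS must support it as well):

netsh int tcp show global
netsh int tcp set global chimney=enabled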

Increased scalability

Hyper-V R2 contains the following new features that increase the scalability of the server virtualization infrastructure:

  • Supports up to 64 logical processors in the main processor pool
    The number of logical processors supported in this version of Hyper-V has quadrupled over the older version of Hyper-V. This enables enterprises to leverage the latest models of large and scalable server systems to maximize the benefits of consolidating existing workloads. In addition, the use of such server systems makes it easier to provide multiple processors for each virtual machine. Hyper-V supports up to four logical virtual processors per virtual machine.
  • Core parking support
    The core parking feature allows Windows and Hyper-V to consolidate processing onto the minimum possible number of processor cores; inactive cores are suspended by placing them in a "parked" C-state. This makes it possible to schedule virtual machines onto a single host rather than spreading them across multiple hosts, which brings the data center closer to a green computing model by reducing the power required by the nodes' CPUs.

Hyper-V and Virtual Server Comparison

The broad capabilities of Hyper-V have already led to the technology replacing Microsoft Virtual Server in many organizations that previously used Virtual Server for server consolidation, business continuity, testing, and development. At the same time, Virtual Server can still find application in corporate virtualization infrastructure. Table 1 compares some of the features and technical details of Hyper-V and Virtual Server.

Table 1. Comparison of components and specifications of Virtual Server 2005 R2 SP1 and Hyper-V R2 (the per-product values have not survived in this copy except for the virtualization type; the compared items were as follows)

Architecture: virtualization type (Virtual Server 2005 R2 SP1 is hosted; Hyper-V R2 is hypervisor-based).

Performance and scalability: 32-bit virtual machines; 64-bit virtual machines; 32-bit hosts; 64-bit hosts; virtual machines with multiple processors; maximum guest RAM per virtual machine; maximum number of guest CPUs per virtual machine; maximum host RAM; maximum number of running virtual machines; resource management.

Availability: guest failover clustering; failover of host computers; host migration; virtual machine snapshots.

Management: extensibility and scripting; user interface (web interface, MMC 3.0 interface); SCVMM integration.

For more information about Virtual Server features and downloads, go to http://www.microsoft.com/windowsserversystem/virtualserver/downloads.aspx. For information on migrating virtual machines from Virtual Server to Hyper-V, see "Virtual Machine Migration Guide: How To Migrate from Virtual Server to Hyper-V" in the TechNet Library at http://technet.microsoft.com/en-us/library/dd296684.aspx.

Hyper-V, the environment for running virtual machines and their guest OSes that is built into Windows (in the server editions as well as some desktop versions and editions), does not always work without problems. One such problem is a notification that pops up when starting a virtual machine, saying that the machine cannot be started because the hypervisor is not running.

What does this error mean, and how can it be fixed?

A window with this error is generic; the cause can lie in several different things.

System requirements

If Windows itself does not meet the requirements for Hyper-V (not every desktop edition supports this component), the component simply cannot be enabled in the system. But there are also hardware requirements; failing to meet them may not prevent enabling the hypervisor, yet it can later become the cause of exactly this error.

Hyper-V requires:

At least 4 GB of RAM;
A 64-bit processor with SLAT support and hardware virtualization technology.

The BCD store

The error in question may indicate a misconfigured Boot Configuration Data (BCD) store. The Hyper-V component is deeply integrated into Windows and starts before the system kernel. If changes affecting how the hypervisor is launched were made in the BCD store, they may be incorrect; alternatively, Hyper-V startup may have been deliberately disabled earlier to temporarily free up computer resources. In that case, the hypervisor-related BCD configuration must either be corrected or returned to its default, which sets Hyper-V to start automatically. To restore automatic startup, open a command prompt as administrator (this is required) and enter:

bcdedit /set hypervisorlaunchtype auto

After that, we reboot.
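To double-check that the change took effect, you can list the current boot entry; the output should contain a hypervisorlaunchtype line set to Auto:

bcdedit /enum {current}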

AMD Bulldozer

Hyper-V does not work on AMD processors based on the Bulldozer architecture.

Virtualization technologies

For any hypervisor-based virtualization environment to run, the processor must support a hardware virtualization technology: Intel VT or AMD-V. You can check whether these technologies are supported on the processor specification pages of the Intel and AMD websites. And, of course, the virtualization technology must be enabled in the BIOS.

Another important nuance: on Intel systems, the Intel VT-d and Trusted Execution technologies must be disabled in the BIOS, because the built-in Windows hypervisor does not get along with them. The BIOS settings for Hyper-V should therefore look like this: the virtualization technology is enabled, and those specific technologies are disabled.
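On Windows 8 / Windows Server 2012 and later you can also check from within Windows whether the firmware has virtualization and SLAT enabled; a quick sketch (these Win32_Processor properties do not exist on older Windows versions):

wmic cpu get Name,VirtualizationFirmwareEnabled,SecondLevelAddressTranslationExtensions,VMMonitorModeExtensions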

With the advent of virtualization support in newer Microsoft operating systems, including client Windows 7, 8, and 10, the proprietary Hyper-V service has ceased to be the preserve of system administrators at mid-sized companies. Hyper-V may well replace the equally popular Oracle VirtualBox for entry-level (client-side) virtualization. However, before installing this service you need to check that the system requirements are met; otherwise you may receive the message: "The virtual machine could not be started because the hypervisor is not running." What should you look for when choosing hardware for virtualization? And can the situation be saved if the hardware has already been purchased? Let's consider this in this post.
So, you have deployed Hyper-V on Windows Server 2008, and when you try to start a virtual machine you get a window like this:

Do not despair; the situation can probably still be saved. Note that the OS must be 64-bit: you would not have been able to deploy Hyper-V on a 32-bit system at all. The first thing to do is check that the corresponding BIOS options are enabled (Intel VT or AMD-V). Next, make sure that your processor actually supports virtualization; Intel and AMD each provide verification tools, one of which is shown in the picture below.

A utility from Mark Russinovich (Sysinternals) can also help identify virtualization support.
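Assuming the utility meant here is Sysinternals Coreinfo, a sketch of the check would be to run it from an elevated command prompt in the folder where coreinfo.exe was unpacked; the -v switch dumps only the virtualization-related features (VMX/SVM, EPT/NPT):

coreinfo -v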


Another common problem is the inability to launch virtual machines on Windows Server 2008 R2 when the processor supports Advanced Vector Extensions (AVX). This OS does not natively support AVX; in this situation, however, a Microsoft hotfix can help.

Background

I put together a home computer four years ago that met all my needs. I decided to save money on the processor and went with AMD. I have no complaints about the computer itself.

Then I started developing for Android, and a surprise awaited me: the emulator would only run on an Intel processor. It could of course be run without hardware virtualization, using the advice at www.youtube.com/watch?v=QTbjdBPKnnw&t=127s, but anyone who has tried that knows the emulator can take a very long time to start. With 12 GB of RAM it took me up to 10 minutes; that may also be partly due to the integrated video card.

My main workplace was at the office, so this did not bother me much and at home I tested on real devices. But a couple of months ago, it was the emulator that became necessary. My first thought, of course, was to buy an Intel processor, but then I would also have had to buy a motherboard and a video card. Most likely I would have done that if I had not stumbled upon the updated system requirements, which say that the emulator can now also be run on Windows 10 (with updates after April 2018) using WHPX technology.

Now the main part of the story is how to do it. It turned out to be not so trivial. I apologize in advance for the omissions, because I cannot call myself an expert either in hardware or in Windows.

Instructions

After all the updates, the emulator naturally did not start. Android Studio was trying to start the emulator using HAXM and throwing the error "Emulator: emulator: ERROR: x86 emulation currently requires hardware acceleration!".

First, the processor must support hardware virtualization.

3. Remove HAXM:

4. Turn on virtualization mode in the BIOS; it may be called IOMMU rather than VT.

5. Download BIOS updates from the official site. For my ASUS board, for example, they were available.

After the update, the BIOS version should be something around 3001:

7. Go to the Microsoft website and study the instructions for enabling the component (Windows Hypervisor Platform).
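If the component in question is the Windows Hypervisor Platform (the feature that WHPX relies on), a sketch of enabling it from an elevated command prompt looks like this; verify the feature name against the Microsoft instructions, and reboot afterwards:

DISM /Online /Enable-Feature /FeatureName:HypervisorPlatform /All

(To enable the full Hyper-V role instead, the feature name would be Microsoft-Hyper-V-All.)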

8. Check the Hyper-V requirements. To do this, run systeminfo at the command line and verify that the following values are displayed:
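On an English-language system, the expected block of systeminfo output looks roughly like this:

Hyper-V Requirements:      VM Monitor Mode Extensions: Yes
                           Virtualization Enabled In Firmware: Yes
                           Second Level Address Translation: Yes
                           Data Execution Prevention Available: Yes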

Instead, I had a message:

The official site says that WHPX will not work until all four values read Yes. In my case, though, the emulator does start when the Windows hypervisor component is enabled.

In the Russian translation, the names are somewhat different:

By the way, after disabling the "Windows Hypervisor Platform" component, the "Hyper-V Requirements" all show Yes-Yes-Yes-Yes. I did not understand why. If anyone knows, write in the comments.

10. Decide whether all this was worth it, or whether it would have been easier just to buy Intel. :)

After these settings, everything should work:

I would note that with WHPX technology on an AMD processor, launching the emulator takes about the same time as on an Intel processor, given that the rest of the hardware is comparable in its specifications.
