Windows 10 UEFI Physical to KVM/libvirt Virtual



























Original Post



I am migrating my PC from Windows 10 to Linux. There are a few things for which I still need Windows, and I am currently dual-booting, with Windows and Linux on separate physical disks. I'd like to get away from dual-booting, and run my Windows 10 installation virtualized under KVM+libvirt+qemu.



The tricky part here seems to be that my Windows 10 install was done through UEFI (with GPT partition table), rather than legacy BIOS MBR. Here's what my Windows disk looks like:



$ sudo parted /dev/nvme0n1 print
Model: Unknown (unknown)
Disk /dev/nvme0n1: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size    File system  Name                          Flags
 1      1049kB  524MB  523MB   ntfs         Basic data partition          hidden, diag
 2      524MB   628MB  104MB   fat32        EFI system partition          boot, esp
 3      628MB   645MB  16.8MB               Microsoft reserved partition  msftres
 4      645MB   500GB  499GB   ntfs         Basic data partition          msftdata


Since it was set up as UEFI, some extra steps seem to be needed to virtualize it, as libvirt doesn't appear to support UEFI out of the box. What I tried was to export each of the above partitions as a qcow2 image, with a command like this:



$ qemu-img convert -f raw -O qcow2 /dev/nvme0n1p1 win10_part1.qcow2


I repeated this for all four partitions, then created a virtual machine in virt-manager, importing all four qcow2 images. I installed the "ovmf" package for my distro (Manjaro), and added this line to the virtual machine's XML config file, in the "os" section:



<loader type='rom'>/usr/share/ovmf/x64/OVMF_CODE.fd</loader>


When I boot the virtual machine, I see the TianoCore splash screen. But it just drops me into a grub2 shell, rather than finding the Windows bootloader.



I also tried booting this VM from the Windows 10 install ISO, hoping that I could "repair" the system to boot. But that did not work.



I'm sure I'm missing something. Even better would be to convert this to MBR boot, just to avoid the OVMF dependency.



Edit/Update...



Per Dylan's comment, I did get it working, but a number of small issues came up along the way; I'm documenting them here in case others hit the same problems.



The first step, as Dylan wrote, was to image the whole disk, rather than individual partitions. I used this command:



$ qemu-img convert -f raw -O qcow2 /dev/nvme0n1 win10_import.qcow2


I then created the virtual machine in virt-manager, specifying the above disk image ("win10_import.qcow2") as my drive.
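In the resulting libvirt domain XML, the imported image is attached as a disk roughly like this (a sketch: the storage path and target device here are illustrative and may differ on your system):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/win10_import.qcow2'/>
  <target dev='sda' bus='sata'/>
</disk>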



The next step was to use the OVMF (TianoCore) UEFI firmware. I installed the ovmf package ("ovmf" on Manjaro), then referenced it in the virtual machine's XML definition:



<os>
  <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
  <loader type='rom'>/usr/share/ovmf/x64/OVMF_CODE.fd</loader>
</os>
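(Note: on recent libvirt versions, the preferred way to reference OVMF is as pflash with a per-VM copy of the UEFI variable store, something like the sketch below; the exact firmware paths depend on your distro's ovmf package. With this form, UEFI settings changes persist across reboots in the NVRAM file.)

<os>
  <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/ovmf/x64/OVMF_CODE.fd</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/win10_VARS.fd</nvram>
</os>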


After that, Windows would still crash during boot, bluescreening with the error "SYSTEM THREAD EXCEPTION NOT HANDLED". For some reason, it didn't like the "Copy host CPU configuration" CPU setting. I changed it to "core2duo", and that booted. Right now I'm using "SandyBridge", which also works. (For what it's worth, I also created a separate Win10 VM as a fresh install from scratch; that VM did work with "Copy host CPU configuration". My CPU is an AMD Ryzen 5 2400G.)
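For reference, selecting a named CPU model in virt-manager produces CPU XML along these lines (a sketch; the model element holds whichever name you picked):

<cpu mode='custom' match='exact'>
  <model fallback='allow'>SandyBridge</model>
</cpu>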



The next problem was that Windows 10 ran unbearably slowly. I had somehow managed to create the VM with the "QEMU TCG" hypervisor rather than "KVM". That explains the slowness: the former is pure emulation and dreadfully slow, while the latter is true hardware-assisted virtualization. How this happened: while trying to get everything working, I had also done a BIOS upgrade on the physical system, which reset all my BIOS settings, including one that disabled hardware virtualization (called "SVM" in my BIOS). Once I re-enabled it, I was able to use the near-native-speed KVM hypervisor.
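A quick way to check whether the host can use KVM at all is to look for the CPU virtualization flags (svm on AMD, vmx on Intel); a count of zero usually means the feature is disabled in the firmware:

$ grep -cE 'svm|vmx' /proc/cpuinfo

The hypervisor in use is also visible at the top of the domain XML: <domain type='kvm'> is hardware-assisted, while <domain type='qemu'> means TCG emulation.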



Next issue was that the screen resolution was stuck at 800x600. Windows wouldn't let me change it. I could do a one-time fix by pressing Esc as soon as the machine boots, right when the TianoCore splash appears. That takes me into UEFI settings, where I can force a higher resolution. But this isn't a permanent fix.



Since my virtual machine specified QXL as the video device, I needed to install the QXL drivers in Windows. The page "Creating Windows virtual machines using virtIO drivers" explains how to do that. The short version: download the virtio-win ISO on the host machine and add it to the VM as a CD-ROM drive. Then boot into the VM, navigate to the right folder on the CD-ROM, and install the needed VirtIO drivers. Specifically, for QXL video on Windows 10, the "qxldod" folder has the right driver.
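For reference, the virtio-win ISO attached as a CD-ROM drive looks roughly like this in the domain XML (a sketch; the file path is illustrative):

<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/virtio-win.iso'/>
  <target dev='sdb' bus='sata'/>
  <readonly/>
</disk>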

























migrated from serverfault.com Dec 31 '18 at 0:17


This question came from our site for system and network administrators.











































      windows-10 uefi linux-kvm p2v






      edited Jan 1 at 16:50







      Matt

















      asked Dec 30 '18 at 22:38









      Matt




          2 Answers
































          QEMU/libvirt expects you to provide a virtual disk: your qcow2 files should be disks, not partitions. What you did produced four qcow2 files, each containing a single partition. That broke the original disk structure, so it is no surprise that GRUB can no longer boot your system.



          I suggest converting the whole physical drive to a single qcow2 file, and then attaching this virtual drive to your VM.



          You should be able to remove the GRUB EFI file from the EFI partition (see the libguestfs tools) and get rid of the boot menu, as the Windows boot loader should then be loaded by the VM's UEFI.
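          That removal might look something like this with guestfish (a sketch: the partition number and directory name are assumptions, so check your own ESP layout first, and work on a copy of the image):

          $ guestfish -a win10_import.qcow2
          ><fs> run
          ><fs> mount /dev/sda2 /
          ><fs> rm-rf /EFI/manjaro
          ><fs> exit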






          answered Dec 30 '18 at 22:54 by Dylan
            If anyone else stumbles onto this question, there is an alternative for using a native Windows install as a VM under Linux. The two options are:




            1. Image the whole device, as per Dylan's accepted answer.

            2. Run the VM from the raw storage.


            I managed #2 above, but it can be quite involved. It becomes quite complex and risky if both Linux and Windows share the same device.



            It's only worth the extra effort if some of the following apply:




            • You already have and like a dual-boot setup.

            • You need to run Windows directly on hardware:

              • Graphics performance for games (and you don't have a motherboard/setup able to do PCI passthrough with 2x GPUs, etc.).

              • Overly sensitive audio applications, such as Skype for Business, that work poorly through virtualized audio devices.

            • You want the convenience of a VM for running other, less demanding Windows apps like MS Office.


            There were numerous caveats/workarounds:




            • I had a fight getting Windows to remain activated, as it obviously ties licenses to hardware. Passing through motherboard/BIOS serial numbers, the exact CPU model, and storage device serial numbers seemed to help.

            • Add udev rules to make Linux/Nautilus/GNOME file manager ignore the Windows partitions.

            • Due to paranoia (worried that Windows updates might affect the grub/boot setup), I didn't just share my whole raw drive with the VM. Instead:

              • I cloned the partition table (GPT) and EFI partition to files, and also created a fake end-of-device image file.

              • Used the loopback driver to treat the cloned images as devices.

              • Used the MD (multi-device) driver, via an mdadm linear setup, to chain all the needed parts together as a hybrid imaged-and-raw device for the VM. E.g. md0 built from <GPT table clone image/loopback> + <recovery raw> + <EFI clone image/loopback> + <windows system raw> + <end of device GPT backup table/loopback>.

              • Used gdisk and testdisk to correct/adjust the partition tables as needed.

              • The Windows 10 1803 (April) update threw in an extra partition I had to adjust for; I needed to correct the tables again.
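            The mdadm linear chaining step might be sketched like this (every device name here is hypothetical; getting the segment order and sizes exactly right is critical, so double-check before booting the VM from it):

            $ sudo mdadm --build /dev/md0 --level=linear --raid-devices=5 \
                  /dev/loop0 /dev/nvme0n1p1 /dev/loop1 /dev/nvme0n1p4 /dev/loop2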




            I used a similar setup on a second system, but made my life much, much easier by having two separate storage devices: one for Linux, the other for Windows.






            share|improve this answer

























              Your Answer








              StackExchange.ready(function() {
              var channelOptions = {
              tags: "".split(" "),
              id: "3"
              };
              initTagRenderer("".split(" "), "".split(" "), channelOptions);

              StackExchange.using("externalEditor", function() {
              // Have to fire editor after snippets, if snippets enabled
              if (StackExchange.settings.snippets.snippetsEnabled) {
              StackExchange.using("snippets", function() {
              createEditor();
              });
              }
              else {
              createEditor();
              }
              });

              function createEditor() {
              StackExchange.prepareEditor({
              heartbeatType: 'answer',
              autoActivateHeartbeat: false,
              convertImagesToLinks: true,
              noModals: true,
              showLowRepImageUploadWarning: true,
              reputationToPostImages: 10,
              bindNavPrevention: true,
              postfix: "",
              imageUploader: {
              brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
              contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
              allowUrls: true
              },
              onDemand: true,
              discardSelector: ".discard-answer"
              ,immediatelyShowMarkdownHelp:true
              });


              }
              });














              draft saved

              draft discarded


















              StackExchange.ready(
              function () {
              StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsuperuser.com%2fquestions%2f1389103%2fwindows-10-uefi-physical-to-kvm-libvirt-virtual%23new-answer', 'question_page');
              }
              );

              Post as a guest















              Required, but never shown

























              2 Answers
              2






              active

              oldest

              votes








              2 Answers
              2






              active

              oldest

              votes









              active

              oldest

              votes






              active

              oldest

              votes









              1














              QEMU/Libvirt expect you to provide virtual disk: your QCOW2 files should be disk and not partitions. By doing what you did, you got 4 qcow2 files, each with a single partition. You broke the previous structure, it is not a surprise that GRUB cannot boot your system anymore.



              I suggest you to convert the whole physical drive to a single QCOW2 file, and then attach this virtual drive to your VM.



              You should be able to remove the GRUB EFI file from the EFI partition (see libguestfs tools) and get ride of the boot menu, as the Windows boot loader should be loaded by the VM's UEFI.






              share|improve this answer




























                1














                QEMU/Libvirt expect you to provide virtual disk: your QCOW2 files should be disk and not partitions. By doing what you did, you got 4 qcow2 files, each with a single partition. You broke the previous structure, it is not a surprise that GRUB cannot boot your system anymore.



                I suggest you to convert the whole physical drive to a single QCOW2 file, and then attach this virtual drive to your VM.



                You should be able to remove the GRUB EFI file from the EFI partition (see libguestfs tools) and get ride of the boot menu, as the Windows boot loader should be loaded by the VM's UEFI.






                share|improve this answer


























                  1












                  1








                  1







                  QEMU/Libvirt expect you to provide virtual disk: your QCOW2 files should be disk and not partitions. By doing what you did, you got 4 qcow2 files, each with a single partition. You broke the previous structure, it is not a surprise that GRUB cannot boot your system anymore.



                  I suggest you to convert the whole physical drive to a single QCOW2 file, and then attach this virtual drive to your VM.



                  You should be able to remove the GRUB EFI file from the EFI partition (see libguestfs tools) and get ride of the boot menu, as the Windows boot loader should be loaded by the VM's UEFI.






                  share|improve this answer













                  QEMU/Libvirt expect you to provide virtual disk: your QCOW2 files should be disk and not partitions. By doing what you did, you got 4 qcow2 files, each with a single partition. You broke the previous structure, it is not a surprise that GRUB cannot boot your system anymore.



                  I suggest you to convert the whole physical drive to a single QCOW2 file, and then attach this virtual drive to your VM.



                  You should be able to remove the GRUB EFI file from the EFI partition (see libguestfs tools) and get ride of the boot menu, as the Windows boot loader should be loaded by the VM's UEFI.







                  share|improve this answer












                  share|improve this answer



                  share|improve this answer










                  answered Dec 30 '18 at 22:54









                  DylanDylan

                  262




                  262

























                      0














                      If anyone else stumbles onto this question, there is another alternative for using a native windows install as a VM in Linux:




                      1. Image the whole device as as per Dylan's accepted answer.

                      2. Run the VM from the raw storage.


                      I managed #2 above, but it can be quite involved. It becomes quite complex and risk if both Linux and Windows share the same device.



                      It's only worth the extra effort for various reasons:




                      • Already have and like a dual boot setup.

                      • Need to run windows directly on hardware.


                        • Graphics performance for games (and don't have a motherboard/setup able to do PCI passthrough with 2x GPU, etc).

                        • Overly sensitive audio applications such as Skype for Business that work poorly through virtualized audio devices.



                      • Want the convenience of a VM for running other less demanding windows apps like MS office, etc.


                      There were numerous caveats/workarounds:




• I had a fight getting Windows to remain activated, since it obviously ties licenses to hardware. Passing through the motherboard/BIOS serial numbers, the exact CPU model, and the storage device serial numbers seemed to help.

• Add udev rules so that Linux/Nautilus/GNOME Files ignores the Windows partitions.

• Out of paranoia (worried that Windows updates might affect the GRUB/boot setup), I didn't just share my whole raw drive with the VM. Instead:


  • I cloned the partition table (GPT) and the EFI partition to files, and also created a fake end-of-device image file.

  • Used the loopback driver to treat the cloned images as devices.

  • Used the MD (multi-device) driver, via an mdadm linear setup, to chain all the needed parts together into a hybrid imaged-and-raw device for the VM. E.g. md0 built from <GPT table clone image/loopback> + <recovery raw> + <EFI clone image/loopback> + <Windows system raw> + <end-of-device GPT backup table/loopback>.

  • Used gdisk and testdisk to correct/adjust the partition tables as needed.

  • The 1803 Windows 10 update threw in an extra partition I had to adjust for! A new partition appears after installing the Windows 10 April Update, so I needed to correct the tables again.
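
The clone-and-chain steps above can be sketched roughly as follows. This is a non-runnable sketch, not a tested recipe: the device names (`/dev/nvme0n1`, its partitions, `/dev/md0`) and sector counts are assumptions matching the question's layout, and every command here needs root and will touch real block devices, so adapt carefully.

```shell
# 1. Clone the protective MBR + GPT header + primary table (LBA 0-33)
#    and the EFI system partition to image files.
sudo dd if=/dev/nvme0n1 of=gpt-head.img bs=512 count=34
sudo dd if=/dev/nvme0n1p2 of=efi.img bs=1M

# 2. Create a fake "end of device" to hold the 33-sector backup GPT.
dd if=/dev/zero of=gpt-tail.img bs=512 count=33

# 3. Attach the image files as loop devices.
LOOP_HEAD=$(sudo losetup -f --show gpt-head.img)
LOOP_EFI=$(sudo losetup -f --show efi.img)
LOOP_TAIL=$(sudo losetup -f --show gpt-tail.img)

# 4. Chain images and raw partitions into one linear device.
#    --build assembles the array WITHOUT metadata superblocks, which is
#    essential here: a superblock would overwrite data in each member.
sudo mdadm --build /dev/md0 --level=linear --raid-devices=5 \
    "$LOOP_HEAD" /dev/nvme0n1p1 "$LOOP_EFI" /dev/nvme0n1p4 "$LOOP_TAIL"

# 5. The partition offsets inside /dev/md0 no longer match the cloned
#    tables, so repair them with gdisk (and verify with testdisk).
sudo gdisk /dev/md0
```

Because the assembled sizes differ from the original disk, step 5 is where gdisk relocates the backup table and the partition entries get adjusted to the new member boundaries.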


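For the udev-rules caveat, a minimal sketch of a rule that hides the Windows partitions from udisks2 (and therefore from Nautilus/GNOME Files) might look like this. The file name and the matched device nodes are placeholders; match on your own partition UUIDs or kernel names.

```
# /etc/udev/rules.d/99-hide-windows.rules  (hypothetical file name)
# Tell udisks2 to ignore the Windows partitions so desktop file
# managers stop offering to mount them.
KERNEL=="nvme0n1p[1-4]", ENV{UDISKS_IGNORE}="1"
```

Reload with `sudo udevadm control --reload` followed by `sudo udevadm trigger` for the rule to take effect without a reboot.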


I used a similar setup on a second system, but made my life much, much easier by having two separate storage devices: one for Linux, the other for Windows.
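
Once the assembled device exists, it can be handed to the guest as a raw block device. A sketch of the libvirt disk definition, assuming `/dev/md0` as built above (the target/bus choice is illustrative):

```xml
<!-- Pass the assembled hybrid device to the guest as raw storage. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/md0'/>
  <target dev='sda' bus='sata'/>
</disk>
```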







                          edited yesterday

























                          answered yesterday









JPvRiel
