How to Host (or Mirror) Your Own Online KVM VPS Locally
#1
There are times when we want to replicate our online VPS locally, whatever the reason behind that decision; this HowTo tutorial covers exactly that process for a QEMU/KVM guest system.

Some Context:
Last June, I decided to terminate my VPS-9 hosting of a GIS project that I had deployed there over a period of two months. The general experience was fair, but I never let it run on autopilot, mainly because of CPU issues (steal time and occasional CPU spikes) with time spans edging the sponsor's AUP.

Because everything was already perfectly set up and working just fine, I decided to replicate my VPS-9 locally on my Fedora Server 32 box, which runs KVM.

The replication process consists of two steps:
  1. First, we clone VPS-9's virtual disk (i.e. /dev/vda) over the network using dd.
  2. Once we have our block device, we set up our KVM VPS by mimicking the same configuration used by VirMach.
In this post, I'll only cover how to clone a block device over the network. The VPS creation will be addressed in the next post.

Cloning your VPS storage device over the network
As with pretty much anything in Linux, there are many ways to clone a block device over the network. But the simplest and most straightforward way is to use the old and time-tested dd tool over SSH.

Before showing the command I used, I must first stress that while you have to keep your VPS online, it must be kept in a quasi-idle state. That means all its services must be turned off, everything!.. Of course you'll still have some residual activity, but the aim is that the filesystem should not be altered too much while we're copying the entire block device. Remember that transferring 100 GiB (the size of VirMach's VPS-9) over the network will take hours!
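A quick back-of-envelope calculation (my own rough estimate, using the ~909 kB/s I ended up averaging; see the full run below) puts numbers on that warning:

# seconds = total bytes / effective bytes per second
echo $((107374182400 / 909000))    # ~118000 s, i.e. roughly 33 hours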

Now for the command:
ssh -i ~/.ssh/phoenix-ecdsa-key xxx.xxx.xxx.xxx -p SSH-PORT "sudo dd if=/dev/vda bs=16M conv=sparse,notrunc,noerror status=progress | gzip -1 -" | dd bs=16M of=phoenix.img.gz

What this command does is first initiate the SSH login via public key authentication and then run the dd command as a sudoer. For this to work, you'll have to let your sudoer run commands without entering the password every time [1].

Now let's see what the subcommand below does:
sudo dd if=/dev/vda bs=16M conv=sparse,notrunc,noerror status=progress | gzip -1 -
This command, if successful, generates a gzipped stream of our block device that feeds the next portion of the pipeline, highlighted below.
<....> | dd bs=16M of=phoenix.img.gz
This portion of the pipeline simply takes the streamed feed and stores it locally as the 'phoenix.img.gz' image file.

Back to the dd command: it was set to copy our block device, /dev/vda, in 16 MiB blocks while ignoring all read errors and constantly reporting its progress.

That's a hell of a command!.. Isn't it?

Here is the whole output in my case, from start to finish, some 33 hours later:
[me@local media]$ ssh -i ~/.ssh/phoenix-ecdsa-key xxx.xxx.xxx.xxx -p SSH-PORT "sudo dd if=/dev/vda bs=16M conv=sparse,notrunc,noerror status=progress | gzip -1 -" | dd bs=16M of=phoenix.img.gz
107374182400 bytes (107 GB, 100 GiB) copied, 118118 s, 909 kB/s
6400+0 records in
6400+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 118118 s, 909 kB/s
0+2428890 records in
0+2428890 records out
47201888523 bytes (47 GB, 44 GiB) copied, 118125 s, 400 kB/s
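Notice the last three lines: the local dd wrote only ~44 GiB, which is the size of the compressed gzip stream, not of the raw device. Before building anything on top of a multi-hour transfer, I'd also suggest checking the archive's integrity (a quick sanity check; gzip -t reads the whole file and verifies its CRCs):

gzip -t phoenix.img.gz && echo "archive OK"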

Once I have my block device image on my PC, I can start setting up my VPS-9 locally; but that's for the next post.

Stay tuned!...

------------------------
[1] Generally, it's just a matter of appending this line at the end of the '/etc/sudoers' file:
sudoerUserName ALL=(ALL) NOPASSWD: ALL
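A safer variant (a suggestion of mine, not what I actually used) is to edit the file with visudo and scope the NOPASSWD rule to the single command we actually need:

# run 'sudo visudo' and add a narrowly-scoped rule instead of a blanket one
sudoerUserName ALL=(ALL) NOPASSWD: /usr/bin/dd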


#2
In the previous post, I showed how to clone a block device over the network, using my VPS-9 block device as the example; we'll assume the resulting image is located at:
/media/phoenix.img.gz

By decompressing it, we'll have the raw disk image of my VPS-9:
gzip -d /media/phoenix.img.gz
ls -al /media | grep phoenix.img
-rw-r--r--. 1 userX userX 107374182400 Jul  14 09:27 phoenix.img
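Optionally, you can peek at the image's partition table straight from the file before building a VM around it (a quick check; fdisk happily operates on regular files):

fdisk -l /media/phoenix.img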

Now is the time to build a KVM guest VM around the phoenix.img disk image. For that we need both QEMU/KVM and Libvirt.

Linux comes with native support for virtualization extensions thanks to a kernel module called KVM (Kernel-based Virtual Machine), which turns the Linux kernel into a hypervisor. QEMU is a userland program that does hardware emulation while working closely with KVM to allow the creation of VMs with all their ad hoc hardware and peripherals. Libvirt, on the other hand, is the API layer for VM management, i.e. VM creation, starting, stopping, destroying, etc.

Of course, I won't delve into the details of these three technologies (outside the scope of this thread) nor into how to install them on different distributions; suffice it to say that on Fedora, installing all three is fairly simple:
sudo dnf install @virtualization

From now on I'll assume that both QEMU/KVM and Libvirt are available on the system and running smoothly without any problem.
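If you want to double-check that assumption on your own box, libvirt ships a validation tool (a quick sanity check; an empty VM list is perfectly fine at this stage):

virt-host-validate qemu    # checks the KVM device, cgroups, IOMMU, etc.
virsh list --all           # should print an (empty) table without errors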

Building a KVM guest around an existing disk image
A new VM can be created either with the graphical virt-manager or with the command-line tool virt-install. Given that I don't want to bother with including images in this post, I'll be using the more versatile option, virt-install.

Besides, virt-manager doesn't differ much from any other virtualization software that average users are already familiar with (e.g. VMware, VirtualBox, etc.)

Now, to create a basic KVM VM, I can simply run the following command, with this minimal set of information, and libvirt will happily fill in all the missing parameters with their defaults:
virt-install --name=centos8Phoenix \
--os-type=linux --os-variant=centos8 \
--ram=8192 --vcpus=2 \
--disk path=/media/phoenix.img \
--import

Above, I gave the VM a name, set the RAM size and the number of vCPUs, and indicated the disk image's operating system, which is CentOS 8. Then I pointed to the location of my raw disk image and specified the --import parameter, which tells libvirt to skip the OS installation process and just boot from the disk.
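If you're curious about the defaults libvirt filled in for us, you can dump the generated domain XML (a quick check, using the VM name chosen above):

virsh dumpxml centos8Phoenix | less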

Now, this will work if all we wanted was to build a VM from the VPS disk, BUT that's not our goal!.. Remember!.. What we want is to emulate the specifics of the Phoenix VPS-9 system in detail, and that's what we will be doing in the next post.

Before ending this post, I have to warn you that when you run the above command for the first time, the VM will fail to boot and will enter emergency mode with a message like this:
Generating "/run/initramfs/rdsosreport.txt"

Entering emergency mode. Exit the shell to continue.
(...)

:/#

This is normal: your raw disk image has some inconsistencies (errors) in its filesystem, due to the residual filesystem activity (logging, etc.) that we talked about in the previous post.

For an LVM-based XFS filesystem like mine, all you have to do is run this repair command:
# cl-root is my LVM root partition
xfs_repair  /dev/mapper/cl-root
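For reference, the whole recovery sequence from the emergency shell would look like this (a sketch on my part; it assumes the guest's volume group is named 'cl', that the root filesystem isn't mounted yet, and that xfs_repair is available in the initramfs):

lvm vgchange -ay                  # activate the LVM volume group first
xfs_repair /dev/mapper/cl-root    # repair the XFS root filesystem
reboot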

When the above command finishes, the slightly corrupted filesystem will be fixed; all you need to do then is reboot your system and you're all set, unless you really want to emulate your online P4V VPS in detail, in which case...

Stay tuned for the next post of this topic...
#3
This is such a great tutorial! I found something like this online when I was having to set up my VPS again so I could upgrade and get everything set up the way it needed to be; then I unzipped the file and uploaded the content I needed for my sites. This is very clearly written out too and I can't wait to see the rest of it!
#4
If you're still with me at this point, then you're really serious about recreating a KVM VPS guest as close, specs-wise, as you can get/afford with your locally available resources. So please read on!


In the previous posts, I showed how to clone a block device over the network (Post #1) and how to build a KVM guest around it (Post #2). In this one, we'll learn how to fine-tune the virt-install command so that it creates a KVM VM meeting a set of requirements. As our example, the requirement will be matching Phoenix VPS-9's specs as closely as possible, down to the details.

To achieve this goal, we'll need two things:
  1. on the one hand, we need to know a bit more about VirMach's Phoenix VPS-9 internals and,
  2. on the other hand, we need to be familiar enough with the virt-install command to let it build a VM with those fine-tuned specs.


Before we get started, I must say that I will break down this (originally single) post into a series of smaller, less overwhelming posts, approximating VirMach VPS-9's specs at each step of the way. Thus, in this post, I'll only focus on VPS-9's chipset, i.e. QEMU's machine type.

Please be aware that I'm using Fedora Server 33 as my KVM host.

A Closer Look at VirMach VPS-9 Specs - Phoenix VPS-9 as an Example
The first question we should try to answer is: what virtual hardware did VirMach put in their P4V-sponsored VPS-9(s)? To figure that out, any VPS-9 holder can simply run the lshw command and read its output, like so on VPS-9@Buffalo:
[root @ kvm-Post2Host-.... ~]# lshw
kvm-post2host-buffalo      
   description: Computer
   product: KVM
   vendor: Red Hat
   version: RHEL 6.6.0 PC
   width: 64 bits
   capabilities: smbios-2.4 dmi-2.4 smp vsyscall32
   configuration: boot=normal family=Red Hat Enterprise Linux uuid=.................................
 *-core
      description: Motherboard
      physical id: 0
    *-firmware
         description: BIOS
         vendor: Seabios
         physical id: 0
         version: 0.5.1
         date: 01/01/2007
         size: 96KiB
    *-cpu:0
         description: CPU
         product: Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
         vendor: Intel Corp.
         physical id: 401
         bus info: cpu@0
         slot: CPU 1
         size: 2GHz
         capacity: 2GHz
         width: 64 bits
capabilities: ...................__ENABLED_CPU_EXTENSIONS_LIST__.......................
    *-cpu:1
         description: CPU
         product: Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
         vendor: Intel Corp.
         physical id: 402
         bus info: cpu@1
         slot: CPU 2
         size: 2GHz
         capacity: 2GHz
         width: 64 bits
capabilities: ...................__ENABLED_CPU_EXTENSIONS_LIST__.......................
    *-memory
         description: System Memory
         physical id: 1000
         size: 8GiB
         capabilities: ecc
         configuration: errordetection=multi-bit-ecc
       *-bank
            description: DIMM RAM
            physical id: 0
            slot: DIMM 0
            size: 8GiB
            width: 64 bits
    *-pci
         description: Host bridge
         product: 440FX - 82441FX PMC [Natoma]
         vendor: Intel Corporation
         physical id: 100
         bus info: pci@0000:00:00.0
         version: 02
         width: 32 bits
         clock: 33MHz
       *-isa
            description: ISA bridge
            product: 82371SB PIIX3 ISA [Natoma/Triton II]
            vendor: Intel Corporation
            physical id: 1
            bus info: pci@0000:00:01.0
            version: 00
            width: 32 bits
            clock: 33MHz
            capabilities: isa bus_master
            configuration: ......................
       *-ide
            description: IDE interface
            product: 82371SB PIIX3 IDE [Natoma/Triton II]
            vendor: Intel Corporation
            physical id: 1.1
            bus info: pci@0000:00:01.1
            logical name: scsi1
            version: 00
            width: 32 bits
            clock: 33MHz
            capabilities: ide isa_compat_mode bus_master emulated
            configuration: driver=ata_piix latency=0
            resources: .......................
          *-cdrom
               description: SCSI CD-ROM
               product: QEMU DVD-ROM
               vendor: QEMU
               physical id: 0.0.0
               bus info: scsi@1:0.0.0
               logical name: /dev/cdrom
               logical name: /dev/sr0
               version: 0.12
               capabilities: removable audio
               configuration: ......................
       *-usb
            description: USB controller
            product: 82371SB PIIX3 USB [Natoma/Triton II]
            vendor: Intel Corporation
            physical id: 1.2
            bus info: pci@0000:00:01.2
            version: 01
            width: 32 bits
            clock: 33MHz
            capabilities: uhci bus_master
            configuration: driver=uhci_hcd latency=0
            resources: ..................
          *-usbhost
               product: UHCI Host Controller
               vendor: Linux ............ uhci_hcd
               physical id: 1
               bus info: usb@1
               logical name: usb1
               version: 4.18
               capabilities: usb-1.10
               configuration: .............
       *-bridge
            description: Bridge
            product: 82371AB/EB/MB PIIX4 ACPI
            vendor: Intel Corporation
            physical id: 1.3
            bus info: pci@0000:00:01.3
            version: 03
            width: 32 bits
            clock: 33MHz
            capabilities: bridge
            configuration: driver=piix4_smbus latency=0
            resources: ............
       *-display
            description: VGA compatible controller
            product: GD 5446
            vendor: Cirrus Logic
            physical id: 2
            bus info: pci@0000:00:02.0
            version: 00
            width: 32 bits
            clock: 33MHz
            capabilities: vga_controller rom
            configuration: driver=cirrus latency=0
            resources: .........................
       *-network
            description: Ethernet controller
            product: Virtio network device
            vendor: Red Hat, Inc.
            physical id: 3
            bus info: pci@0000:00:03.0
            version: 00
            width: 32 bits
            clock: 33MHz
            capabilities: msix bus_master cap_list rom
            configuration: driver=virtio-pci latency=0
            resources: ....................
          *-virtio0
               description: Ethernet interface
               physical id: 0
               bus info: virtio@0
               logical name: eth0
               serial: ...................
               capabilities: ethernet physical
               configuration: .......................
       *-scsi
            description: SCSI storage controller
            product: Virtio block device
            vendor: Red Hat, Inc.
            physical id: 4
            bus info: pci@0000:00:04.0
            version: 00
            width: 32 bits
            clock: 33MHz
            capabilities: scsi msix bus_master cap_list
            configuration: driver=virtio-pci latency=0
            resources: .....................................
          *-virtio1
               description: Virtual I/O device
               physical id: 0
               bus info: virtio@1
               logical name: /dev/vda
               size: ................................
             *-volume:0
                  description: EXT3 volume
                  vendor: Linux
                  physical id: 1
                  bus info: virtio@1,1
                  logical name: /dev/vda1
                  logical name: /
                  version: 1.0
                  ............................
             *-volume:1
                  description: Linux swap volume
                  physical id: 2
                  bus info: virtio@1,2
                  logical name: /dev/vda2
                  version: 1
                  size: ..........................
       *-memory
            description: RAM memory
            product: Virtio memory balloon
            vendor: Red Hat, Inc.
            physical id: 5
            bus info: pci@0000:00:05.0
            version: 00
            width: 32 bits
            clock: 33MHz (30.3ns)
            capabilities: bus_master
            configuration: driver=virtio-pci latency=0
            resources: ...................
          *-virtio2 UNCLAIMED
               description: Virtual I/O device
               physical id: 0
               bus info: virtio@2
               configuration: driver=virtio_balloon
    *-pnp00:00
        .....................................
[root@kvm-Post2Host-.... ~]#

For obvious reasons, I redacted the output, but I will subsequently show the relevant sections in more detail, by hardware class.

1. VirMach's VPS-9 Machine Type:
As shown above (in the core and pci sections) and below (see the system and businfo outputs), VPS-9 uses a dated 64-bit version of the PC machine type, an alias of pc-i440fx-rhel6.6.0, which still ships a rather outdated SeaBIOS version: 0.5.1.
[root@vps-9 ~]# lshw -c system
my.hostname.com        
   description: Computer
   product: KVM
   vendor: Red Hat
   version: RHEL 6.6.0 PC
   width: 64 bits
   capabilities: smbios-2.4 dmi-2.4 smp vsyscall32
   configuration: boot=normal family=Red Hat Enterprise Linux uuid=.....
 *-pnp00:00
      product: PnP device PNP0b00
      physical id: 0
      capabilities: pnp
      configuration: driver=rtc_cmos

[root@vps-9 ~]# lshw -businfo
Bus info          Device      Class      Description
====================================================
                             system     KVM
                             bus        Motherboard
                             memory     96KiB BIOS
......
pci@0000:00:00.0              bridge     440FX - 82441FX PMC [Natoma]
pci@0000:00:01.0              bridge     82371SB PIIX3 ISA [Natoma/Triton II]
......

With the above information in mind, we're able to fine-tune our virt-install command like so:
virt-install  --virt-type=kvm --hvm --arch=x86_64 --machine=pc \
--name=centos8Phoenix \
--os-type=linux --os-variant=centos8 \
--ram=8192 --vcpus=2 \
--disk path=/media/phoenix.img \
--import

Why is setting the machine type important in this case?.. Simply because leaving it unset will cause libvirt to use QEMU's current default on Fedora, which is 'q35' (Q35 + ICH9, 2009), while setting it to 'pc' (i440FX + PIIX, 1996) selects the most recent version of the old PC machine type (i.e. pc-i440fx-5.1, as of this writing.)

If you need to use the most recent pc-i440fx-rhelx.x.x, then you need to use RHEL or CentOS as the KVM host.
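To see which machine types your local QEMU actually supports, and what the 'pc' alias currently points to, you can ask QEMU directly (a quick check; on Fedora the binary may live at /usr/libexec/qemu-kvm instead of being in your PATH):

qemu-system-x86_64 -machine help | grep -E 'i440fx|q35'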

In the next post we'll take a look at fine-tuning the guest CPU.
#5
In this post, we'll talk a bit about the QEMU/KVM guest CPUs used in VirMach's VPS-9(s), before ending with the right virt-install command to approximate the CPU model used in Phoenix VPS-9.

To make the case, I'll use the already published data of three VPS-9s (Atlanta, L.A., Seattle.) I'll also use my own, still unpublished, data on VPS-9 @Phoenix and @Buffalo.

1- Guest CPUs Used in VPS-9(s):
From the available data, we have:

1.1- VPS 9 (Atlanta)
https://post4vps.com/Thread-Virmach-VPS-...-Dream-VPS
Processor       : QEMU Virtual CPU version (cpu64-rhel6)
CPU Cores       : 2 @ 2499.998 MHz
Kernel          : 3.10.0-1062.18.1.el7.x86_64

Geekbench 5.0.1 ( https://browser.geekbench.com/v5/cpu/2472702 )
  -Single-Core Score  : 316
  -Multi-Core score   : 414

System Benchmarks Index Score  x1                                       250.8
System Benchmarks Index Score  x2                                       348.4

1.2- VPS 9 (L.A.)
https://post4vps.com/Thread-Virmach-VPS-9-Review
Processor  : Intel Xeon E312xx (Sandy Bridge, IBRS update)
CPU cores  : 2 @ 2499.998 MHz
Kernel          : ?

Geekbench 5.3.1 ( https://browser.geekbench.com/v5/cpu/5391725 )
  -Single-Core Score  : 376
  -Multi-Core score   : 646

1.3- VPS 9 (Seattle)
https://post4vps.com/Thread-Virmach-VPS-...ew-Seattle
Processor  : Intel® Xeon® CPU E5-2670 v2 @ 2.50GHz
CPU Cores       : 2@ 2499.998 MHz
Kernel          : 3.10.0-1062.4.1.el7.x86_64

Geekbench 5.1.0 ( https://browser.geekbench.com/v5/cpu/2077291 )
  -Single-Core Score  : 387
  -Multi-Core score   : 702

System Benchmarks Index Score  x2                                       664.6

1.4- VPS 9 (Phoenix)
(Unpublished data)
Processor  : Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
CPU cores  : 2 @ 2399.996 MHz
Kernel        : 4.18.0-147.8.1.el8_1.x86_64

Scores for GeekBench 5.1.1, 5.1.0 and 4.3.1, in single and multi-core tests:
>> GeekBench 5.1.1 :  368 ;   572
>> GeekBench 5.1.0 :  392 ;   651
>> GeekBench 4.3.1 : 1797 ;  2701

UnixBench (2)
System Benchmarks Index Score  x1                                      479.3
System Benchmarks Index Score  x2                                          -

1.5- VPS 9 (Buffalo)
(Unpublished data)
Processor    : Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
CPU Cores    : 2 @ 2499.998 MHz
Kernel        : 4.18.0-240.1.1.el8_3.x86_64

Scores for GeekBench 5.1.1, 5.1.0 and 4.3.1, in single and multi-core tests:
>> GeekBench 5.1.1 :  367 ;   646 (https://browser.geekbench.com/v5/cpu/6320829)
>> GeekBench 5.1.0 :  396 ;   654 (https://browser.geekbench.com/v5/cpu/6321825)
>> GeekBench 4.3.1 : 1931 ;  2995 (https://browser.geekbench.com/v4/cpu/16028249)

UnixBench (2)
System Benchmarks Index Score  x1                                      436.0
System Benchmarks Index Score  x2                                      837.7

2- Libvirt CPU Models :
Libvirt supports three ways to configure guest CPU models:
  • Host passthrough - In this mode, the host CPU model, stepping, and features are passed to the guest as faithfully as possible, though certain CPU features are still filtered out by the hypervisor. This is the recommended mode when live migration isn't needed.
    This mode was used for VPS-9 @Phoenix, @Buffalo and @Seattle.
  • Named models - A set of predefined, named CPU models supported by QEMU, corresponding to specific generations of CPUs released by hardware vendors. These named CPUs are typically used when live migration between hosts with differing hardware is mission-critical.
    This mode was used for VPS-9 @Atlanta.
  • Host model - Libvirt's default mode; it uses QEMU's named models to automatically choose a CPU model as close to the host CPU as possible, adding any extra flags that optimize both host CPU matching and VM live migration. (You can check what these modes resolve to on your own host; see the quick check right after this list.)
    This mode was used for VPS-9 @L.A.
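Here is that check (a sketch; 'virsh domcapabilities' is available in any reasonably recent libvirt):

virsh domcapabilities | sed -n '/<cpu>/,/<\/cpu>/p'

The <cpu> section tells you whether host-passthrough is usable and which named model host-model would pick on your machine.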

Armed with the above information, we're now in a position to fine-tune our virt-install command even further, like so:
virt-install  --virt-type=kvm --hvm --arch=x86_64 --machine=pc \
--name=centos8Phoenix \
--os-type=linux --os-variant=centos8 \
--ram=8192 --vcpus=2 --cpu host-passthrough \
--disk path=/media/phoenix.img \
--import

Again, why is it important to add the --cpu flag to the command?.. Answer: to avoid ending up with a host-model CPU (on my system, that's Model: IvyBridge-IBRS, which resolves to Intel Xeon E3-12xx v2 inside the guest) instead of the more powerful host-passthrough model, which resolves to your own host CPU (i.e. Intel® Core™ i5-3470 CPU @ 3.20GHz on my KVM host.)
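Once the guest is up, you can confirm which model it ended up with from inside it:

lscpu | grep 'Model name'

With --cpu host-passthrough you should see your host's own CPU string; with the default host-model you'd see the resolved named model instead (Intel Xeon E3-12xx v2 in my case.)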

In the next post we'll talk a bit about disk I/O fine-tuning.
#6
(02-05-2021, 03:11 PM)fChk Wrote: In the next post we'll talk a bit about disk I/O fine-tuning.

Just before tackling the disk I/O part, I'd like to ponder the CPU performance data of VPS-9 a bit longer, and for that I need some data points from @sohamb03 and @sagher.

To make my point, I need the output of this command on their systems:
grep flags -m1 /proc/cpuinfo

This command will print all the flags enabled on their vCPUs. So, @sohamb03 and @sagher, could you please share your findings here?

Just be aware that I generally don't publish those flags verbatim in my posts; it's generally a bad idea because, in conjunction with your IP, someone with a motive could cause harm!..

So, if you don't want to publish them here either, could you please PM them to me?.. Thanks!
#7
Apologies for the late reply @fChk, I hardly check my alerts. If the forum hadn't gotten stuck in mobile view today when I logged in, I guess you'd have waited months for a reply unless I spotted this thread.

Here's the information you requested:

[sohamb03@sohamb03 ~]$ grep flags -m1 /proc/cpuinfo
flags           : [redacted]

Cheers!
#8
(03-03-2021, 04:29 AM)sohamb03 Wrote: Apologies for the late reply @fChk, I hardly check my alerts. If the forum hadn't gotten stuck in mobile view today when I logged in, I guess you'd have waited months for a reply unless I spotted this thread.

Here's the information you requested:

[sohamb03@sohamb03 ~]$ grep flags -m1 /proc/cpuinfo
flags           : [ ........vCPU_Extensions_..........]

Cheers!

Thanks for the input @sohamb03!

I think you might need to remove those flags from your post now :-)

The absence of some flags leaks some information about potential CPU vulnerabilities.

Thanks again!
#9
(03-03-2021, 05:28 PM)fChk Wrote: Thanks for the input @sohamb03!

I think you might need to remove those flags from your post now :-)

The absence of some flags leaks some information about potential CPU vulnerabilities.

Thanks again!

Done that. Thanks for the information! 

If possible, could you please elaborate about these vulnerabilities?
#10
(03-04-2021, 05:22 AM)sohamb03 Wrote: Done that. Thanks for the information! 

If possible, could you please elaborate about these vulnerabilities?

I will in the next post in this thread :-)

For now, just log into your VPS and type the following commands:
cd /sys/devices/system/cpu/vulnerabilities/
grep . *

The last command will output your CPU vulnerabilities and/or their applied mitigations if they exist.

Spoiler alert: it's affected!.. The less I say, the better :-)