06-12-2020, 10:59 AM
@sohamb03 Thanks for your patience and swift responses!... Rather rare in this Forum! :-)
So, here are my comments on your storage-related results. I'll still be asking for a few more inputs at the end of this post.
Code: (Select All)
[sohamb03@google ~]$ ./bench.sh -io
Disk Speed
----------
I/O (1st run) : 547 MB/s
I/O (2nd run) : 708 MB/s
I/O (3rd run) : 731 MB/s
Average I/O : 662 MB/s
This is okay-ish compared to the previous 802.333 MB/s. @deanhills' Dallas VPS-9 scored 535 MB/s on the dd buffered write, so you effectively take the cake :-)
By the way, why not replace the "Disk Speed" label in your script with what it really is, i.e. dd's buffered write speed (with a 64K block size × 1024)? Or, if you don't want to mention dd, just "Buffered sequential write speed". "Disk Speed" leaves the casual user of your script clueless, especially if they can't read Bash syntax.
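For context, this kind of "Disk Speed" figure usually comes from a plain buffered dd write. Just as a sketch (I'm only guessing at what bench.sh actually runs; the file name and count below are illustrative):

Code: (Select All)
# Buffered sequential write: 64K blocks, with a final fdatasync so the
# reported figure includes flushing the page cache to the (virtual) disk.
dd if=/dev/zero of=./bench_testfile bs=64k count=1024 conv=fdatasync
# Clean up the test file afterwards.
rm -f ./bench_testfile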
Code: (Select All)
[sohamb03@google ~]$ ./disk-speed-beta.sh
Testing hard drive write speed.
Pass 1... Pass 2... Pass 3... 813 MB/s
Testing hard drive read speed with caching.
Pass 1... Pass 2... Pass 3... 2.6 GB/s
Testing hard drive read speed without caching.
Pass 1... Pass 2... Pass 3... 859 MB/s
OK!.. here you have increased the block size to 1M instead of 64K, hence the difference in write speed (662 MB/s vs 813 MB/s.) Throughput always increases with a larger block size in a sequential write test, but this jump is a bit off!
For dd's read speeds, we don't have published data on this forum to compare against, except mine, which I haven't published yet; but I'm using a 64K block size (same as in the write test.) So your cached read is very conservative, while the buffered read is GREAT, provided it doesn't change too often. My Phoenix VPS-9 fluctuates between 1.3 GB/s as an all-time high and 115 MB/s as an all-time low when using 'none' as the guest I/O scheduler [yes, that is yet another factor to keep in mind!]
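If anyone wants to reproduce the cached vs. non-cached read distinction, it basically boils down to whether the page cache still holds the file when you re-read it. A minimal sketch (file name illustrative; dropping the caches needs root):

Code: (Select All)
# Write a ~1 GiB test file once.
dd if=/dev/zero of=./readtest bs=64k count=16k conv=fdatasync
# Cached read: the file is still in the page cache, so this mostly measures RAM.
dd if=./readtest of=/dev/null bs=64k
# Drop the page cache (root), then read again: now the data comes off the (virtual) disk.
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=./readtest of=/dev/null bs=64k
rm -f ./readtest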
Quote:(PS: Here I encountered a strange problem. I'm not an expert at all this; since I don't deal with hardware at all. For some reason, my system seems to be missing "/dev/sda" and I've the least idea why, so I performed the upcoming tests on "/dev/loop0". No idea if that makes a difference but nevermind. If I were to make a wild guess, I got CentOS 7 installed from ISO since I needed the default partitions for some purpose ... maybe that's why.)
I think you can remove this now, given that you do know WHY!.. Leaving it there only adds to the confusion of the casual reader. Besides, the only reason you didn't know about it is that you were missing some background on how the Linux kernel manages block devices and how loop devices (with filesystems loop-mounted on them) relate to them.
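If you want to see at a glance why /dev/sda is "missing" and where /dev/loop0 comes from, the standard util-linux tools are enough (nothing VPS-specific assumed here):

Code: (Select All)
# On a VirtIO guest the main disk shows up as vda (not sda); loop0 appears
# as a loop device with a filesystem mounted on top of it.
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT
# Show which file backs each active loop device.
losetup -l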
For the HDPARM results, see for yourself how they compare with the already published data on the forum:
Code: (Select All)
----------------------------------------------------------------
| HdParm | cache_r | buffered_r | direct_r | Tester |
----------------------------------------------------------------
| VPS-16 | 3.7 GB/s | 337.82 MB/s | 324.3 MB/s |HiddenRefuge|
----------------------------------------------------------------
| NAT-VPS | 16.1 GB/s | 464.97 MB/s | 465.1 MB/s |HiddenRefuge|
----------------------------------------------------------------
| VPS-4 | 11.1 GB/s | 105.53 MB/s | 91.7 MB/s | rudra |
----------------------------------------------------------------
| VPS-6 | 7.3 GB/s | 213.23 MB/s | 130.2 MB/s | chanalku91 |
----------------------------------------------------------------
|VPS-9-At | 3.8 GB/s |1113.88 MB/s |999.13 MB/s | sohamb03 |
----------------------------------------------------------------
HR's NAT-VPS virtual disk is backed by a powerful NVMe-based SSD (based on the data @Neoon provides on his NanoKVM homepage), but its I/O is capped at around 500 MB/s (mine is on a regular SSD, capped to 105 MB/s.) So you can infer where I'm going with this logic :-) Yes, maybe VirMach is using an NVMe-based SSD on your Atlanta node.. OR (more probably) they are simply using writeback caching at the VPS host level, in which case both the host page cache and the disk write cache are enabled for the guest, which would ultimately explain the near-gigabyte read/write storage performance.

Alas, the economics of it make me lean more towards the second hypothesis.
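For reference, the three columns in that table normally map to hdparm invocations along these lines (assuming the guest disk is /dev/vda and running as root):

Code: (Select All)
# cache_r: cached reads, mostly a memory/cache bandwidth figure
hdparm -T /dev/vda
# buffered_r: buffered disk reads through the normal read path
hdparm -t /dev/vda
# direct_r: reads using O_DIRECT, bypassing the page cache
hdparm -t --direct /dev/vda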
For the IOPing:
Code: (Select All)
-----------------------------------------------------------------------
| IOPing |lat(us)| seek_rate | seq_r_rate | Tester |
-----------------------------------------------------------------------
| VPS-16 | 3230 | 1.01 k iops, 975.4 us | 137.3 MiB/s |HiddenRefuge|
-----------------------------------------------------------------------
| NAT-VPS | 267 | 5.56 k iops, 176.0 us | 464.9 MiB/s |HiddenRefuge|
-----------------------------------------------------------------------
| VPS-4 | 3540 | 199 iops, 5.0 ms | 120.7 MiB/s | rudra |
-----------------------------------------------------------------------
| VPS-6 | 405 | 2.02 k iops, 488.7 us | 85.6 MiB/s | chanalku91 |
-----------------------------------------------------------------------
|VPS-9-At | | 1.11 k iops, 781.8 us | 238.3 MiB/s | sohamb03 |
-----------------------------------------------------------------------
Again, see how HR's NAT-VPS performs vs yours: big difference.

For the latency, I didn't include your result as it was measured inside the loop-mounted /tmp folder. I need it run either on /dev/vda or in your sudoer's home directory (with ioping, a directory's latency comes out a lot lower than the block device's, and the results in the table above are the ones taken from folders.)
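And for the other columns of that table, the seek-rate and sequential-rate figures typically come from ioping invocations like these (run from inside the directory being tested):

Code: (Select All)
# lat(us): latency over a handful of requests
ioping -c 15 .
# seek_rate: random request rate test
ioping -R .
# seq_r_rate: rate test with sequential 256 KiB requests
ioping -RL .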
Anyway, what I'm sure about at this point is that your VPS has something unusual going on configuration-wise (compared to the other data published here). So, to complete the picture, can you provide the following bits of information:
Code: (Select All)
ioping -c 15 /home/mysudoer
# needs root
ioping -c 15 /dev/vda
# to know what's backing your loop0 device
cat /sys/block/loop0/loop/backing_file
# and maybe also
cat /etc/fstab
# to know your CPU specs
lscpu
# to list the PCI devices (shows the VirtIO controllers)
lspci -m
# to know which I/O scheduler you're using
cat /sys/block/vda/queue/scheduler
# memory overview
egrep 'MemTotal|MemFree|Buffers|^Cached|^Swap|Huge' /proc/meminfo
# check if you have tuned running
systemctl status tuned
# if so, check which profile it is using
tuned-adm active
That should be all :-)
(06-11-2020, 04:11 PM)sohamb03 Wrote: BTW, here's something I wanted to ask. As I said, I'm more into software than hardware, hence, I can't make out why the sequential rate is so high on /tmp. Can you please explain to me?

Code: (Select All)
[root@google ~]# ioping -RL /tmp

--- /tmp (ext4 /dev/loop0) ioping statistics ---
7.31 k requests completed in 2.52 s, 1.79 GiB read, 2.90 k iops, 725.9 MiB/s
generated 7.32 k requests in 3.00 s, 1.79 GiB, 2.44 k iops, 609.6 MiB/s
min/avg/max/mdev = 203.7 us / 344.4 us / 26.5 ms / 506.1 us

Thanks and regards,

Loop-mounted devices are handled locally at your VPS's kernel level, hence the high performance. Your virtual disk, on the other hand, is a virtualized block device that involves the entire Host/KVM/QEMU stack communicating via VirtIO device drivers. That's a thick, multi-layered interface which penalizes both latency and throughput, depending on how the caching type and the I/O mode are set by the deployer.
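To give an idea of what "caching type and I/O mode" means on the host side, here is a rough sketch of how a deployer might attach a disk to a KVM guest with QEMU (purely illustrative values and paths; we obviously can't see VirMach's actual configuration):

Code: (Select All)
# cache=writeback lets the guest benefit from the host page cache and the disk
# write cache, which can inflate the numbers seen inside the guest;
# cache=none with aio=native bypasses the host page cache and usually gives
# lower but more "honest" results.
qemu-system-x86_64 \
    -drive file=/var/lib/images/guest.qcow2,if=virtio,cache=writeback,aio=threads
# (rest of the VM definition omitted)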
Starting next weekend, I'll have more time available and, hopefully, will start a series of threads/posts where I'll lay out my understanding of this whole virtualization thing, while publishing my own VPS-9 data and trying to make sense of it as far as I can.
The motivation behind all this is that I've noticed the review section is way too cheerful and not as objective as I would like it to be, with the exception of some notable ones of course. Also, people's lack of the relevant knowledge often leads to misleading interpretations of the data... That has to change... I hope...