Virmach  VPS 9 (Atlanta) - My Dream VPS!
#11
@sohamb03  Thanks for your patience and swift responses!... Rather rare in this Forum! :-)

So, here are my comments on your storage-related results. I'll still be asking for a few more inputs at the end of this post.

[sohamb03@google ~]$ ./bench.sh -io
Disk Speed
----------
I/O (1st run)   : 547 MB/s
I/O (2nd run)   : 708 MB/s
I/O (3rd run)   : 731 MB/s
Average I/O     : 662 MB/s

This is okay-ish compared to the previous 802.333 MB/s. @deanhills' Dallas VPS-9 scored 535 MB/s for dd's buffered write speed. Thus, you effectively take the cake :-)

By the way, why not replace the "Disk Speed" label in your script with what it really is, i.e. dd's buffered write speed (with a 64K block size × 1024)? Or, if you don't want to mention dd, just "Buffered Sequential Write Speed". "Disk Speed" leaves the casual user of your script clueless, especially if he can't read Bash script syntax.
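
For reference, that kind of buffered sequential write test usually boils down to a single dd line along these lines (an assumed invocation, not necessarily bench.sh's exact code; the file name is arbitrary):

dd if=/dev/zero of=./bench_testfile bs=64k count=16k conv=fdatasync   # 64K blocks x 16K = 1 GiB; fdatasync forces a flush at the end
rm -f ./bench_testfile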

[sohamb03@google ~]$ ./disk-speed-beta.sh
Testing hard drive write speed.
Pass 1... Pass 2... Pass 3... 813 MB/s
Testing hard drive read speed with caching.
Pass 1... Pass 2... Pass 3... 2.6 GB/s
Testing hard drive read speed without caching.
Pass 1... Pass 2... Pass 3... 859 MB/s

Ok!.. here you have increased the block size to 1M instead of 64K, hence the difference in write speed (662 MB/s vs 813 MB/s). It always increases in a throughput sequential write test, but this jump is a bit off!
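
In other words, something like this for the beta script's write pass (again an assumed invocation, just to make the block-size difference explicit):

dd if=/dev/zero of=./bench_testfile bs=1M count=1024 conv=fdatasync   # same 1 GiB total, but written in 1M blocks
rm -f ./bench_testfile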

For dd's read speeds, we don't have published data in this forum to compare them with, except mine, which I haven't published yet; but I'm using a 64K block size (same as in the write test). So the cached read is very conservative, but the buffered read is GREAT if it doesn't change too often. Phoenix-VPS-9 fluctuates between 1.3 GB/s as an all-time high and 115 MB/s as an all-time low when using 'none' as the guest I/O scheduler [Yes! there is this too as yet another factor to keep in mind!]
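
For anyone wanting to reproduce the cached vs. buffered read distinction, a minimal sketch (it reuses the file written by the write test; dropping the caches needs root):

dd if=./bench_testfile of=/dev/null bs=64k    # cached read: the data is still in the guest's page cache
echo 3 > /proc/sys/vm/drop_caches             # flush page cache, dentries and inodes (root only)
dd if=./bench_testfile of=/dev/null bs=64k    # buffered read: now it actually hits the virtual disk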

Quote:(PS: Here I encountered a strange problem. I'm not an expert at all this, since I don't deal with hardware at all. For some reason, my system seems to be missing "/dev/sda" and I haven't the least idea why, so I performed the upcoming tests on "/dev/loop0". No idea if that makes a difference, but never mind. If I were to make a wild guess, I installed CentOS 7 from ISO since I needed the default partitions for some purpose ... maybe that's why.)

I think you can remove this now, given that you do know WHY!.. Leaving it there adds to the confusion of the casual reader. Besides, the only reason you didn't know about it is that you were lacking knowledge of how the Linux kernel manages block devices and how loop devices, via loop-mounted filesystems, relate to them.
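
For completeness, the loop-device picture is easy to inspect from inside the guest with standard util-linux tools:

losetup -a                            # list active loop devices and the file backing each one
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT    # shows loop0 next to vda in the block-device tree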


For the HDPARM results, see for yourself how they compare with the already published data on the forum:
----------------------------------------------------------------
| HdParm  |   cache_r  | buffered_r  | direct_r   |   Tester   |
----------------------------------------------------------------
|  VPS-16 |  3.7 GB/s  | 337.82 MB/s | 324.3 MB/s |HiddenRefuge|
----------------------------------------------------------------
| NAT-VPS | 16.1 GB/s  | 464.97 MB/s | 465.1 MB/s |HiddenRefuge|
----------------------------------------------------------------
|  VPS-4  | 11.1 GB/s  | 105.53 MB/s |  91.7 MB/s |   rudra    |
----------------------------------------------------------------
|  VPS-6  |  7.3 GB/s  | 213.23 MB/s | 130.2 MB/s | chanalku91 |
----------------------------------------------------------------
|VPS-9-At |  3.8 GB/s  |1113.88 MB/s |999.13 MB/s |  sohamb03  |
----------------------------------------------------------------
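For reference, the three columns presumably map onto the standard hdparm invocations (run as root against the guest's virtual disk):

hdparm -T /dev/vda            # cache_r    : reads served from the Linux page cache
hdparm -t /dev/vda            # buffered_r : sequential reads through the buffer cache
hdparm -t --direct /dev/vda   # direct_r   : O_DIRECT reads, bypassing the page cache
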
HR's NAT-VPS virtual disk is backed by a powerful NVMe-based SSD (based on @Neoon's data provided on his NanoKVM homepage) but its I/O is capped at around 500 MB/s (mine is on a regular SSD capped at 105 MB/s). Thus, you can infer where I'm going with this logic :-) Yes, maybe VirMach is using an NVMe-based SSD on your Atlanta node.. OR (more probably) they are simply using writeback caching at the VPS host level, in which case both the host page cache and the disk write cache are enabled for the guest, which would ultimately explain the near-gigabyte read/write storage performance.

Alas, economic logic makes me lean more towards the second hypothesis.

For the IOPing:
-----------------------------------------------------------------------
| IOPing  |lat(us)|       seek_rate       |  seq_r_rate  |   Tester   |
-----------------------------------------------------------------------
|  VPS-16 |  3230 | 1.01 k iops, 975.4 us |  137.3 MiB/s |HiddenRefuge|
-----------------------------------------------------------------------
| NAT-VPS |   267 | 5.56 k iops, 176.0 us |  464.9 MiB/s |HiddenRefuge|
-----------------------------------------------------------------------
|  VPS-4  |  3540 |    199 iops,  5.0 ms  |  120.7 MiB/s |   rudra    |
-----------------------------------------------------------------------
|  VPS-6  |   405 | 2.02 k iops, 488.7 us |   85.6 MiB/s | chanalku91 |
-----------------------------------------------------------------------
|VPS-9-At |       | 1.11 k iops, 781.8 us |  238.3 MiB/s |  sohamb03  |
-----------------------------------------------------------------------
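Likewise, the columns presumably correspond to the usual ioping runs (the block-device tests need root):

ioping -c 10 .         # request latency against a directory (the lat column)
ioping -R /dev/vda     # seek rate: random 4 KiB reads for ~3 s (seek_rate column)
ioping -RL /dev/vda    # sequential read rate with 256 KiB requests (seq_r_rate column)
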
Again, see how HR's NAT-VPS performs vs yours: big difference.

For the latency, I didn't include your results as they were performed inside the loop-mounted /tmp folder. I need either /dev/vda or your sudoer's home directory (a directory's latency with ioping is a lot lower than the block device's, and the results in the table above are the ones done from folders).

Anyway, what I'm sure about at this point is that your VPS has something unusual configuration-wise (compared to the other data published here). So, to complete the picture, can you provide the following bits of information:
ioping -c 15 /home/mysudoer
# needs root
ioping -c 15 /dev/vda
# to know what's backing your loop0 device
cat /sys/block/loop0/loop/backing_file
# maybe also
cat /etc/fstab
# to know your CPU specs
lscpu
#
lspci -m
# to know which I/O scheduler you're using
cat /sys/block/vda/queue/scheduler
#
egrep 'MemTotal|MemFree|Buffers|^Cached|^Swap|Huge' /proc/meminfo
# check if you have tuned running
systemctl status tuned
# if so, then check what profile is it using
tuned-adm active

That should be all :-)

(06-11-2020, 04:11 PM)sohamb03 Wrote: BTW, here's something I wanted to ask. As I said, I'm more into software than hardware, hence I can't make out why the sequential rate is so high on /tmp. Can you please explain it to me?

[root@google ~]# ioping -RL /tmp

--- /tmp (ext4 /dev/loop0) ioping statistics ---
7.31 k requests completed in 2.52 s, 1.79 GiB read, 2.90 k iops, 725.9 MiB/s
generated 7.32 k requests in 3.00 s, 1.79 GiB, 2.44 k iops, 609.6 MiB/s
min/avg/max/mdev = 203.7 us / 344.4 us / 26.5 ms / 506.1 us

Thanks and regards,
Loop devices are handled locally, at your VPS's kernel level, hence the high performance. Your virtual disk, on the other hand, is a virtualized block device that involves the entire Host/KVM/QEMU stack communicating via VirtIO device drivers. It's a thick multi-layer interface that penalizes both latency and throughput, depending on how the caching type and the I/O mode are set by the deployer.
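
To illustrate what I mean by "how the caching type and the I/O mode are set": these are per-disk options picked by the host operator at VM launch time. The line below is purely illustrative (the image path and values are assumptions, not VirMach's actual configuration):

qemu-system-x86_64 -m 2048 -enable-kvm \
    -drive file=/var/lib/libvirt/images/guest.qcow2,if=virtio,cache=writeback,aio=threads
# cache=writeback -> host page cache AND disk write cache enabled for the guest (fast, riskier)
# cache=none      -> O_DIRECT on the host, bypasses its page cache (closer to the raw disk's numbers)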

Starting next weekend, I'll have more time available and, hopefully, will start a series of threads/posts where I'll lay out my understanding of this whole virtualization thing, while publishing my own VPS-9 data and trying to make sense of it as far as I can.

The motivation for all this is that I've noticed the review section is way too cheerful and not as objective as I would like it to be, with the exception of some notable ones of course. Also, people's lack of the relevant knowledge generally contributes to a misleading way of interpreting the data... That has to change... I hope...
VirMach's Buffalo_VPS-9 Holder (Dec. 20 - July 21)
microLXC's Container Holder (july 20 - ?)
VirMach's Phoenix_VPS-9 Holder (Apr. 20 - June 20)
NanoKVM's NAT-VPS Holder (jan. 20 - ?)
#12
It used to be my dream VPS too when I first joined this forum. I applied for it in June 2017 and got the Phoenix location (not to be confused with the Agus IndiHome package; it's a city in Arizona), and it worked great, never shutting down unexpectedly. In early 2018 I failed to reach 20 posts and lost my VPS 9, but I got it back in the next giveaway, this time the Seattle one. The Seattle one sometimes shut down unexpectedly and I had to contact the admin to turn it back on.

When I used VPS 9 it was still on OpenVZ virtualization; due to the EOL of OpenVZ 6 they switched it to KVM Lite. I did a stupid thing on VPS 9 once: I tried to upgrade to Ubuntu 18.04, ended up bricking the system, and had to request a reinstall. Luckily VPS 9 was already on KVM by then, so that's no longer a worry. But I've since switched my VPS to VPS 5.
Terminal
humanpuff69@FPAX:~$ Thanks To Shadow Hosting And Post4VPS for VPS 5
#13
Thanks @fChk

(06-12-2020, 10:59 AM)fChk Wrote: So, here are my comments on your storage-related results. I'll still be asking for a few more inputs at the end of this post.

[sohamb03@google ~]$ ./bench.sh -io
Disk Speed
----------
I/O (1st run)   : 547 MB/s
I/O (2nd run)   : 708 MB/s
I/O (3rd run)   : 731 MB/s
Average I/O     : 662 MB/s

This is okay-ish compared to the previous 802.333 MB/s. @deanhills' Dallas VPS-9 scored 535 MB/s for dd's buffered write speed. Thus, you effectively take the cake :-)

Actually, I did another test inside my sudoer's home directory .. and the results are fluctuating a bit again. Big Grin

[sohamb03@google ~]$ ./bench.sh -io
Buffered Sequential Write Speed
-------------------------------
I/O (1st run)   : 722 MB/s
I/O (2nd run)   : 726 MB/s
I/O (3rd run)   : 700 MB/s
Average I/O     : 716 MB/s




(06-12-2020, 10:59 AM)fChk Wrote: By the way, why not replace the "Disk Speed" label in your script with what it really is, i.e. dd's buffered write speed (with a 64K block size × 1024)? Or, if you don't want to mention dd, just "Buffered Sequential Write Speed". "Disk Speed" leaves the casual user of your script clueless, especially if he can't read Bash script syntax.

Yeah, I've implemented that for now, but I'll be integrating disk-speed-beta.sh into the main benchmarking script soon .. so that'll clearly differentiate between what's what.





(06-12-2020, 10:59 AM)fChk Wrote: I think you can remove this now, given that you do know WHY!.. Leaving it there adds to the confusion of the casual reader. Besides, the only reason you didn't know about it is that you were lacking knowledge of how the Linux kernel manages block devices and how loop devices, via loop-mounted filesystems, relate to them.


Removed it .. and yeah now I know what they are, thanks to the detailed explanation by you and HR. :-)


And now .. the other results that you wanted:
[root@google ~]# ioping -c 15 /home/sohamb03
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=1 time=3.27 ms (warmup)
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=2 time=495.8 us
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=3 time=1.12 ms
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=4 time=527.2 us
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=5 time=589.5 us
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=6 time=586.2 us
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=7 time=491.6 us (fast)
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=8 time=2.04 ms (slow)
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=9 time=2.04 ms (slow)
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=10 time=2.88 ms (slow)
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=11 time=1.36 ms
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=12 time=396.1 us (fast)
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=13 time=2.85 ms (slow)
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=14 time=4.32 ms (slow)
4 KiB <<< /home/sohamb03 (xfs /dev/dm-2): request=15 time=1.26 ms

--- /home/sohamb03 (xfs /dev/dm-2) ioping statistics ---
14 requests completed in 21.0 ms, 56 KiB read, 667 iops, 2.61 MiB/s
generated 15 requests in 14.0 s, 60 KiB, 1 iops, 4.29 KiB/s
min/avg/max/mdev = 396.1 us / 1.50 ms / 4.32 ms / 1.14 ms
[root@google ~]# ioping -c 15 /dev/vda
4 KiB <<< /dev/vda (block device 78 GiB): request=1 time=2.74 ms (warmup)
4 KiB <<< /dev/vda (block device 78 GiB): request=2 time=2.71 ms
4 KiB <<< /dev/vda (block device 78 GiB): request=3 time=941.8 us
4 KiB <<< /dev/vda (block device 78 GiB): request=4 time=7.72 ms
4 KiB <<< /dev/vda (block device 78 GiB): request=5 time=3.19 ms
4 KiB <<< /dev/vda (block device 78 GiB): request=6 time=3.30 ms
4 KiB <<< /dev/vda (block device 78 GiB): request=7 time=954.2 us (fast)
4 KiB <<< /dev/vda (block device 78 GiB): request=8 time=2.44 ms
4 KiB <<< /dev/vda (block device 78 GiB): request=9 time=2.88 ms
4 KiB <<< /dev/vda (block device 78 GiB): request=10 time=447.9 us (fast)
4 KiB <<< /dev/vda (block device 78 GiB): request=11 time=468.4 us (fast)
4 KiB <<< /dev/vda (block device 78 GiB): request=12 time=773.7 us (fast)
4 KiB <<< /dev/vda (block device 78 GiB): request=13 time=2.90 ms
4 KiB <<< /dev/vda (block device 78 GiB): request=14 time=4.31 ms
4 KiB <<< /dev/vda (block device 78 GiB): request=15 time=562.2 us (fast)

--- /dev/vda (block device 78 GiB) ioping statistics ---
14 requests completed in 33.6 ms, 56 KiB read, 416 iops, 1.63 MiB/s
generated 15 requests in 14.0 s, 60 KiB, 1 iops, 4.29 KiB/s
min/avg/max/mdev = 447.9 us / 2.40 ms / 7.72 ms / 1.92 ms

[root@google ~]# cat /sys/block/loop0/loop/backing_file
/usr/.tempdisk

[root@google ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Thu Mar 19 12:38:24 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=1638f137-52ce-4b4e-828b-553a54f64459 /boot                   xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
/usr/.tempdisk /tmp ext4 loop,rw,noexec,nosuid,nodev,nofail 0 0
/tmp /var/tmp none bind 0 0

[root@google ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             2
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 13
Model name:            QEMU Virtual CPU version (cpu64-rhel6)
Stepping:              3
CPU MHz:               2499.998
BogoMIPS:              4999.99
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              4096K
NUMA node0 CPU(s):     0,1
Flags:                 fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm nopl eagerfpu pni cx16 hypervisor lahf_lm

# Had to install `pciutils` for this one

[root@google ~]# lspci -m
00:00.0 "Host bridge" "Intel Corporation" "440FX - 82441FX PMC [Natoma]" -r02 "Red Hat, Inc." "Qemu virtual machine"
00:01.0 "ISA bridge" "Intel Corporation" "82371SB PIIX3 ISA [Natoma/Triton II]" "Red Hat, Inc." "Qemu virtual machine"
00:01.1 "IDE interface" "Intel Corporation" "82371SB PIIX3 IDE [Natoma/Triton II]" -p80 "Red Hat, Inc." "Qemu virtual machine"
00:01.2 "USB controller" "Intel Corporation" "82371SB PIIX3 USB [Natoma/Triton II]" -r01 "Red Hat, Inc." "QEMU Virtual Machine"
00:01.3 "Bridge" "Intel Corporation" "82371AB/EB/MB PIIX4 ACPI" -r03 "Red Hat, Inc." "Qemu virtual machine"
00:02.0 "VGA compatible controller" "Cirrus Logic" "GD 5446" "Red Hat, Inc." "QEMU Virtual Machine"
00:03.0 "Ethernet controller" "Red Hat, Inc." "Virtio network device" "Red Hat, Inc." "Device 0001"
00:04.0 "SCSI storage controller" "Red Hat, Inc." "Virtio block device" "Red Hat, Inc." "Device 0002"
00:05.0 "RAM memory" "Red Hat, Inc." "Virtio memory balloon" "Red Hat, Inc." "Device 0005"

[root@google ~]# cat /sys/block/vda/queue/scheduler
[mq-deadline] kyber none

[root@google ~]# egrep 'MemTotal|MemFree|Buffers|^Cached|^Swap|Huge' /proc/meminfo
MemTotal:        8009116 kB
MemFree:         1175680 kB
Buffers:         1755968 kB
Cached:          1666652 kB
SwapCached:          200 kB
SwapTotal:       8179708 kB
SwapFree:        8164092 kB
AnonHugePages:    290816 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

# Yeah, tuned is running

[root@google ~]# systemctl status tuned
● tuned.service - Dynamic System Tuning Daemon
  Loaded: loaded (/usr/lib/systemd/system/tuned.service; enabled; vendor preset: enabled)
  Active: active (running) since Mon 2020-05-25 09:30:05 EDT; 2 weeks 4 days ago
    Docs: man:tuned(8)
          man:tuned.conf(5)
          man:tuned-adm(8)
Main PID: 292251 (tuned)
  CGroup: /system.slice/tuned.service
          └─292251 /usr/bin/python2 -Es /usr/sbin/tuned -l -P

[root@google ~]# tuned-adm active
Current active profile: virtual-guest

That's all. 


Again, thanks a lot for the explanations ... really appreciate it! If you need any other information, feel free to tell me. Smile

Regards,
Sayan Bhattacharyya,

Heartiest thanks to Post4VPS and Virmach for my wonderful VPS 9!
#14
@sohamb03

All clear!.. Nothing in your input is unexpected, except maybe the buffer + cache values, which are unusually high compared to what I get:
Buffers:         1755968 kB
Cached:          1666652 kB

Worth investigating further by checking the Linux kernel's virtual memory (VM) subsystem, i.e. /proc/sys/vm/. But I won't do it.
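
If anyone does want to poke at it, the usual suspects are a handful of standard sysctls, listed here only as starting points:

sysctl vm.dirty_ratio vm.dirty_background_ratio vm.swappiness vm.vfs_cache_pressure
grep -r . /proc/sys/vm/ 2>/dev/null   # or dump the whole subsystem at once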

(06-12-2020, 02:06 PM)sohamb03 Wrote: Yeah, I've implemented that for now, but I'll be integrating disk-speed-beta.sh into the main benchmarking script soon .. so that'll clearly differentiate between what's what.
Don't forget to add dd's direct write and synchronous write tests too.
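As a rough sketch of what those could look like (file name and sizes are arbitrary examples):

dd if=/dev/zero of=./bench_testfile bs=64k count=16k oflag=direct   # direct write: O_DIRECT, bypasses the page cache
dd if=/dev/zero of=./bench_testfile bs=64k count=16k oflag=dsync    # synchronous write: each block flushed to the device
rm -f ./bench_testfile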

(06-12-2020, 10:59 AM)fChk Wrote: Starting next weekend, I'll have more time available and, hopefully, will start a series of threads/posts where I'll lay out my understanding of this whole virtualization thing, while publishing my own VPS-9 data and trying to make sense of it as far as I can.
Unfortunately, something came up and I won't be able to continue posting in this forum. The Phoenix-based VPS-9 will be returned by the end of the month as a result.

Good luck to you all!
VirMach's Buffalo_VPS-9 Holder (Dec. 20 - July 21)
microLXC's Container Holder (july 20 - ?)
VirMach's Phoenix_VPS-9 Holder (Apr. 20 - June 20)
NanoKVM's NAT-VPS Holder (jan. 20 - ?)
#15
Oh well, that's sad to hear, @fChk, that you will be temporarily unavailable. I hope you can sort out whatever problems you are facing soon and come back to post on the forums.

I do not know what exactly you are looking for in `/proc/sys/vm/`, so I'll wait for your next reply. Honestly, I enjoyed this discussion with you and would like to continue it in the future whenever possible.

Again, my best wishes for you! Smile
Sayan Bhattacharyya,

Heartiest thanks to Post4VPS and Virmach for my wonderful VPS 9!
#16
(06-11-2020, 01:53 PM)ikk157 Wrote: It especially caught my attention that you mentioned that Deanhills informed you that the previous holder of VPS 9 Seattle had negative feedback on it (never knew that, to be honest). That, if anyone here doesn't already know, is the same VPS I currently have.

I have to say that you're wrong!.. and here is why.

You and the previous VPS holder are both located in Seattle, but ever since the transition to KVM, your KVM VPS has nothing to do with @Golden's OVZ-based VPS 9 Virmach Seattle.

So, whatever issue you have has nothing to do with his own set of problems. I've just checked that review and it looked like a misconfiguration of the VPS from the start:
System Info
-----------
Processor       : Intel® Xeon® CPU E5-2620 v3 @ 2.40GHz
CPU Cores       : 2
Frequency       : 2401.000 MHz
Memory          : 8192 MB
Swap            : 0 MB
Uptime          : 16:28,
OS              : Ubuntu 14.04.5 LTS
Arch            : i686 (32 Bit)
Kernel          : 2.6.32-042stab127.2
Hostname        : Post2Host-Seattle

Starting with the use of a 32-bit OS template (on a 64-bit VPS host!!) that nevertheless recognized 8 GB of RAM, which points to the possibility of PAE being enabled.

Just from this fact alone, I would say that the OVZ-based Seattle VPS-9 needed a 64-bit template reinstall from the get-go.
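
For anyone checking their own VPS, bitness and PAE capability are a one-liner each:

uname -m                          # i686 = 32-bit install, x86_64 = 64-bit install
grep -m1 -ow pae /proc/cpuinfo    # prints 'pae' if the CPU exposes Physical Address Extension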

And if you pay attention to that OVZ VPS host's processor, you would note that it was an Intel® Xeon® CPU E5-2620 v3 @ 2.40GHz, while your own KVM-based Seattle VPS has an Intel® Xeon® CPU E5-2670 v2 @ 2.50GHz.

Conclusion: it's not even the same VPS host that's used, to say nothing of the fact that an OVZ container has nothing to do with a KVM VM performance-wise even when running inside the same VPS host!
VirMach's Buffalo_VPS-9 Holder (Dec. 20 - July 21)
microLXC's Container Holder (july 20 - ?)
VirMach's Phoenix_VPS-9 Holder (Apr. 20 - June 20)
NanoKVM's NAT-VPS Holder (jan. 20 - ?)
#17
(02-21-2021, 12:58 PM)fChk Wrote: Starting with the use of a 32-bit OS template (on a 64-bit VPS host!!) that nevertheless recognized 8 GB of RAM, which points to the possibility of PAE being enabled.

Just from this fact alone, I would say that the OVZ-based Seattle VPS-9 needed a 64-bit template reinstall from the get-go.

Yep, it was really pathetic to see that review; I wonder how he could complain of issues when he was using a 32-bit OS architecture. I wonder if that VPS would ever have performed well with, just as you said, a 64-bit template on that 64-bit host.

Nevertheless, this post is to let everyone know that VPS 9 Atlanta has been performing really well. I had months of uptime until an OS reinstall a month ago and a network issue that required a reboot about a week ago. Still, its performance is unmatched and perfectly suitable for my use case, and all my applications are performing really well on this beast.

A shout out to VirMach too! Smile
Sayan Bhattacharyya,

Heartiest thanks to Post4VPS and Virmach for my wonderful VPS 9!
#18
(02-28-2021, 11:01 AM)sohamb03 Wrote: Yep, it was really pathetic to see that review; I wonder how he could complain of issues when he was using a 32-bit OS architecture. I wonder if that VPS would ever have performed well with, just as you said, a 64-bit template on that 64-bit host.

Nevertheless, this post is to let everyone know that VPS 9 Atlanta has been performing really well. I had months of uptime until an OS reinstall a month ago and a network issue that required a reboot about a week ago. Still, its performance is unmatched and perfectly suitable for my use case, and all my applications are performing really well on this beast.

A shout out to VirMach too! Smile

That's good to hear! Are you running a web server on that? I wonder how many of these VPS 9s are used as web servers. Right now I have the Seattle server and am trying to get things running with CyberPanel. I'm getting all sorts of problems that I haven't had before.


~ Be yourself everybody else is taken ~




#19
(02-28-2021, 11:14 AM)xdude Wrote: That's good to hear! Are you running a web server on that? I wonder how many of these VPS 9s are used as web servers. Right now I have the Seattle server and am trying to get things running with CyberPanel. I'm getting all sorts of problems that I haven't had before.

I run a heck of a lot of things on the VPS, a web server for sure. It houses some of my Docker applications, a Xolentum seed node, about 8 Discord bots, and a music node for my bot. Then there's VestaCP, the panel I use (the CentOS 7 fork, btw: https://github.com/madeITBelgium/vesta), so Nginx with Apache; all of my beta websites are housed on that server, and a lot more.

With so many things, though, my RAM usage is about 50-80% and CPU stays below 50%. Also, I have an 8 GB swap mounted, which helps when compiling one of my Docker applications, since it needs more than 8 GB of RAM to compile one of the files. Big Grin
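
A quick way to keep an eye on that RAM-vs-swap pressure during one of those heavy compiles (standard tools, nothing specific to my setup):

free -h     # current RAM, buffers/cache and swap usage, human-readable
vmstat 5    # the si/so columns show swap-in/swap-out activity every 5 seconds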
Sayan Bhattacharyya,

Heartiest thanks to Post4VPS and Virmach for my wonderful VPS 9!