Hi Innovaphone,
we are currently hosting 165 IPVAs (mostly V12r2) on VMware/ESXi 6.7. That platform will soon be EOL and also doesn't integrate well into our standard virtualization environment. We are therefore looking to move these IPVAs to a RHEL-based QEMU/KVM environment. We had to work around some issues, but for now we have found a solution that seems to be stable and running well. However, since KVM is currently not actively supported by Innovaphone, we want to make sure that we can keep running the IPVAs in the future. So perhaps you could share some internals, and possibly your plans for newer releases, with us.
As a first step we used our standard emulation profile, which provides the four disk images via the standard IDE block driver and a PCnet network card for the Ethernet connection. This seems to run the IPVA in the same mode as under VMware. However, the IPVA uses the Intel performance-counter feature (the RDPMC instruction) for its internal timekeeping, which seems to be heavily implementation dependent (and also varies within a given CPU architecture). On older Intel Xeon E3-1200 v5 hosts the internal IPVA clock ran far too fast (somewhere between 20% and 40%), which completely broke any announcement messages. On a modern AMD Ryzen it ran too slow (not as badly, around 5%, but still not really usable).
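For reference, here is a minimal sketch of that initial profile (memory size, image paths, disk format, bridge name and MAC address are placeholders, not our production values):

  # four IPVA disk images on the default IDE controller, PCnet NIC on a host bridge
  qemu-system-x86_64 -accel kvm -machine pc -m 2048 \
      -drive file=ipva-disk0.img,format=raw,if=ide,index=0 \
      -drive file=ipva-disk1.img,format=raw,if=ide,index=1 \
      -drive file=ipva-disk2.img,format=raw,if=ide,index=2 \
      -drive file=ipva-disk3.img,format=raw,if=ide,index=3 \
      -netdev bridge,id=net0,br=br0 \
      -device pcnet,netdev=net0,mac=52:54:00:aa:bb:cc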
From some googling it seems that performance counters are implemented differently between CPU revisions. So my guess would be that the IPVA assumes a fixed counter frequency for a certain CPU it detects? Maybe that is something that can be adjusted through a configuration setting?
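For what it's worth, the advertised performance-monitoring capabilities can be inspected via CPUID leaf 0Ah (Intel's architectural PMU leaf), and they do differ between our hosts; on the Ryzen machines that leaf is essentially empty, since AMD doesn't implement it. For example, with the cpuid utility:

  # dump CPUID leaf 0x0a (architectural performance monitoring) for one CPU
  cpuid -1 -l 0xa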
Since Innovaphone also supports Hyper-V as a platform, we then enabled emulation of the Microsoft paravirtual clock ("hypervclock"). This completely fixed the internal clock issues we had, giving us a stable and usable clock source and working announcements. However, the IPVA apparently switches to a different mode when it detects Hyper-V and no longer accepts PCnet as a network adapter.
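In QEMU terms this just means turning on the Hyper-V time enlightenment on the vCPU, e.g. adding the following to the invocation above:

  # hv-time = Hyper-V reference-time enlightenment ("hypervclock")
  -cpu host,hv-time

Note that enabling any hv-* flag also makes QEMU advertise the "Microsoft Hv" hypervisor signature in CPUID leaf 0x40000000, which is presumably what the IPVA bases its mode detection on.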
We therefore added the old legacy Tulip network card to the emulation (which had to report the PCI device ID 0x0009, i.e. a DEC 21140, instead of the 0x0019 of the DEC 21143 that is normally emulated). It also had an endianness problem, presenting the MAC address in reverse byte order, which we could fix by changing the byte order in the Tulip device emulation. In this mode everything runs fine.
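The network part of the profile then looks roughly like this (the MAC is again a placeholder; the device-ID change itself is a small patch to QEMU's hw/net/tulip.c rather than a runtime option, since as far as I can see the device exposes no property for it):

  -netdev bridge,id=net0,br=br0 \
  -device tulip,netdev=net0,mac=52:54:00:aa:bb:cc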
Even under heavy load and running for several days in production, we couldn't find any issues. Live migrations also work without any interruption to running calls. Performance is also a lot better compared to VMware (my guess would be that the performance counter slows the emulation down considerably, since every RDPMC instruction triggers an additional context switch into the hypervisor, i.e. a VM exit, which is simply not necessary when using the Hyper-V clock).
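In case anyone wants to verify this on their own hosts, the VM-exit statistics can be watched live on the KVM host:

  # per-guest VM-exit reasons; look for RDPMC in the list
  perf kvm stat live

With the perfcounter clock active, RDPMC should show up among the exit reasons; with hv-time enabled it should disappear.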
Could you share with us what kind of virtual hardware the IPVA supports and expects for best results? And also, exactly which features the IPVA uses to switch modes between VMware and Hyper-V?
More specifically:
- Is it possible to run with mixed hardware, e.g. using the Hyper-V clock as the time source but PCnet as the network card?
- Is it possible to support some more "common" time sources, like PIT, TSC or HPET?
- And if not, is it possible to adjust the tick frequency the IPVA expects when running with performance counters?
- Are there any other network adapters that can be used regardless of the operation mode (like ne2k, e1000 or rtl8139)?
- Are there any plans to deprecate parts of the current feature set in the near future? In particular, can we expect to run the IPVA with Hyper-V clock plus Tulip networking in future releases?
- And lastly, are there plans to switch to some more "common" hardware in future releases (like SCSI block devices and more "modern" x86 architecture features, which would improve performance even further)? I guess asking for support of more specialized virtualization devices (like virtio) is a bit too much?
I know that KVM is currently not supported, but if we could at least be sure that the current feature set will not go away, we would feel better about decommissioning our old ESXi platform. So far both V12r2 and V13r2 seem to run flawlessly in our setup.
Best regards
Thomas