APC UPS Data Center & Enterprise Solutions Forum
Schneider Electric / APC support forum for sharing knowledge about installation and configuration of Data Center and Business Power UPSs, accessories, software, and services.
Posted: 2021-06-29 01:50 AM . Last Modified: 2024-03-13 04:35 AM
Hello Community,
Currently using PCNS 4.0 to protect a vSphere 6 cluster - installed as the vMA vApp provided by APC. I would like to upgrade to the newest PCNS 4.1.0 vMA vApp without having to re-install the platform or re-invent the wheel.
Is there an in-place or backup/restore type of upgrade method that has been identified by the developers?
Has anyone attempted to perform any type of upgrade migration between the 2 versions? (In-place; Backup/Restore; Other.)
Given that all of this is built on a *nix-type OS, I know that the configuration is housed in individual files. With the possible exception of some private encryption keys, I would feel pretty safe assuming that the configuration files between the two versions are very close to identical. Allowing for a few lines added to a configuration file for new features, it isn't a stretch to believe that the configuration can largely be copied between versions.
If someone has a complete list of the configuration files referenced by PCNS as installed on a vMA instance, I wouldn't mind doing the work to create a differential list of configuration nodes in the various files between 4.0 and 4.1 - especially if it means I can install the latest PCNS vApp, stop the service(s), apply the backed-up configuration files from 4.0 (with edits if needed) to the 4.1 instance, and have it fire up and go.
Again, the only thing I can imagine being trouble would be any private keys for traffic encryption/certificates, if applicable. Even those might be exportable/migratable.
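A minimal sketch of that differential comparison, assuming the PCNS configuration uses simple key = value entries; the file format, keys, and values here are illustrative, not taken from an actual install:

```python
# Hypothetical sketch: compare two PCNS config versions key-by-key.
# Assumes a plain "key = value" format with #/; comments; real PCNS
# files may differ, so treat this as a starting point only.

def parse_config(text):
    """Parse 'key = value' lines into a dict, skipping blanks and comments."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", ";")):
            continue
        if "=" in line:
            key, _, value = line.partition("=")
            entries[key.strip()] = value.strip()
    return entries

def diff_configs(old_text, new_text):
    """Return (added, removed, changed) key lists between two versions."""
    old, new = parse_config(old_text), parse_config(new_text)
    added = sorted(set(new) - set(old))      # keys new in 4.1
    removed = sorted(set(old) - set(new))    # keys dropped since 4.0
    changed = sorted(k for k in set(old) & set(new) if old[k] != new[k])
    return added, removed, changed
```

Running `diff_configs()` over each backed-up 4.0 file and its 4.1 counterpart would produce exactly the "differential list of configuration nodes" described above.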
If anyone has any experience with this - good, bad or otherwise - please advise,
Thank you,
Posted: 2021-06-29 01:50 AM . Last Modified: 2024-03-13 04:34 AM
FYI:
All is well for the most part - both items were updated successfully.
I've been having a look at the new 'VM Prioritization' feature and have set up our VMs in groups per the GUI. It seems, however, that every time I touch that page, PCNS tells our vCenter that the UPS is reporting 'Runtime Exceeded' and begins migrating VMs and putting our pHosts into Maintenance Mode. No actual VM/host shutdowns yet, but there may be some logic imposed by the settings on that priority page that causes PCNS to think the 1 hr 48 min of run-time the UPS is actually reporting is too short. I've set the shutdown timing on the priority page back down to 30 seconds - much lower than the 120 seconds on the VM Shutdown page.
Can you tell me: the 'VM Shutdown Duration' on the VM Prioritization page - is the duration configured by those settings ('Low', 'Med', 'High') applied per VM or per group? The issue may be that Duration x number of VMs works out longer than the UPS's reported run-time and is triggering the event. When I configure a few minutes for each priority group and that is multiplied by our ~40 VMs, the estimated time is suddenly over the UPS run-time.
I had originally assumed it might be per group and would shut down the VMs in each group 'Low' --> 'High' en masse, the same way it migrates them. I guess it is not applied 'per group'...
(...I've been fighting with this for the last hour while I've been trying to write this up...)
Now it seems that PCNS has decided to start putting my pHosts into maintenance mode at random, even after reverting (disabling) the VM Priority settings. I cannot have PCNS deciding to move my VMs around at will - this is now an issue...
Is there any chance that once PCNS has decided (correctly or otherwise) to begin shutdown procedures, it will continue with that procedure even if the mis-configuration is changed back to be within the parameters of the UPS's actual run-time? i.e., for a shutdown procedure, is there some kind of service ticket (a file, or an entry in a file) created on the vMA appliance running PCNS that is continuously executed until completion? That might explain why PCNS keeps putting my hosts into maintenance mode even after reverting the settings to their pre-upgrade, supposedly stable state.
I need to understand this (execution of a shutdown process) a little better to be able to understand why PCNS has decided seemingly on its own to begin messing with our vSphere cluster environment.
(Another hour passes...)
I guess what ended up happening was that the basic shutdown settings (never mind the prioritization) were set too high, and when calculated per VM, the ~2 hours of run-time on the UPS was not sufficient per the math [shutdown time per VM x number of VMs], so PCNS was starting migration/shutdown processes even though we had stable power.
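That arithmetic can be checked directly. The sequential per-VM model is my assumption about how PCNS budgets the time; the numbers mirror this post (~40 VMs, 1 hr 48 min of reported runtime, "a few minutes" per VM):

```python
# Back-of-envelope check: if the shutdown duration is applied per VM
# (not per group), the total budget can easily exceed UPS runtime.
# The per-VM-sum model is an assumption, not confirmed PCNS behavior.

def required_shutdown_seconds(vm_count, per_vm_duration_s):
    """Total shutdown budget if the duration is applied per VM, sequentially."""
    return vm_count * per_vm_duration_s

ups_runtime_s = 1 * 3600 + 48 * 60            # 6480 s reported by the UPS
total = required_shutdown_seconds(40, 180)    # ~40 VMs at 3 min each
print(total, total > ups_runtime_s)           # 7200 True - exceeds runtime
```

At 180 s per VM the estimate is 7200 s against 6480 s of runtime, which would trip a 'Runtime Exceeded' condition even on stable power; at 30 s per VM the total drops to 1200 s and fits comfortably.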
I guess PCNS 4 may have had some other bug that prevented it from recalculating the required shutdown time. We recently added a bunch of new VMs (essentially doubling the size of our virtual infrastructure), and that didn't cause any issues with PCNS until I actually logged in and started reviewing the settings - which happened to be right after the upgrade to 4.1.0. Once I touched the config again, it must have recalculated the required shutdown time based on the newly discovered VMs and decided that 2 hrs was not long enough, so I had to go and re-configure the basic shutdown time. I'll re-attempt the prioritization configuration and see where that leaves me...
Posted: 2021-06-29 01:50 AM . Last Modified: 2024-03-13 04:35 AM
Hi,
Please review Kbase FA276378, "How do I upgrade the PowerChute Network Shutdown (PCNS) Virtual Appliance".
Posted: 2021-06-29 01:50 AM . Last Modified: 2024-03-13 04:35 AM
Bill,
Thank you for the info. I will attempt to address this first thing tomorrow morning.
(Just an FYI: Searching APC / Google didn't provide me with this result, perhaps that article may need to be indexed/keyword-ed.)
In conjunction with that, since it doesn't appear I'll be able to upgrade the vMA platform (VMware says you can't upgrade a vMA version prior to 5 up to version 5; you have to reinstall) - how do you recommend I update the VMware Tools on that vApp?
I know that some flavors of Linux offer an open-source guest tools package (open-vm-tools) that works with vSphere and is available via the standard apt-get/aptitude/whatever package manager - or I could mount and run the VMware Tools installer available from the host.
Based on what I see in vCenter for the PCNS vApp, I have a feeling it is running the native VMware Tools, as it currently lists the tools status as [(!) 'Out-of-date'] rather than [(?) 'Guest Managed']. Another of my Linux guest VMs that uses the open-source package shows its status as [(?) 'Guest Managed'].
TYVMIA!
Posted: 2021-06-29 01:50 AM . Last Modified: 2024-03-13 04:35 AM
Hi,
We have Kbase FA176573, which discusses how to update VMware Tools on the PowerChute Appliance.
Posted: 2021-06-29 01:50 AM . Last Modified: 2024-03-13 04:35 AM
You da man...
Thank you,