Habitat Application Manager (Old Version)
Posted: Fri Oct 01, 2021 4:22 pm
The new Habitat Application Manager is a deployment and update manager for third-party applications running on Windows agents inside ConnectWise Automate. It lets the MSP install applications on demand, but better yet, Habitat auto-detects which applications may already be installed on each agent and matches installed software against packages in its approved applications list so those applications can be updated and managed going forward.
Here is how it works:
The Habitat Application Manager uses the Chocolatey.org (NuGet) framework to help manage available application versions and their installation. Habitat deploys the Chocolatey framework to each agent enabled for management and assigns a single agent at each location the job of caching all approved packages on that location's cache drive. The rest of the agents are instructed to receive their updates from this cache drive, providing a huge savings in bandwidth and ensuring that service limits are not exceeded.
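The per-location arrangement described above can be sketched in a few lines. This is an illustrative model only; the function name, data shapes, and the rule of picking the first agent are assumptions, not the plugin's actual API.

```python
# Illustrative sketch of Habitat's per-location role split:
# one agent per location caches packages; the rest pull from that cache.
# Names and structures here are hypothetical, not the plugin's real API.

def assign_roles(agents_by_location):
    """For each location, pick one agent as the caching agent;
    all other agents become normal (cache-fed) agents."""
    roles = {}
    for location, agents in agents_by_location.items():
        caching_agent = agents[0]          # assumption: first enabled agent
        roles[caching_agent] = "caching"   # downloads from the public repo
        for agent in agents[1:]:
            roles[agent] = "normal"        # installs/updates from the cache share
    return roles

fleet = {"HQ": ["srv01", "ws01", "ws02"], "Branch": ["ws10", "ws11"]}
print(assign_roles(fleet))
```

Only the "caching" agent ever touches the public repository, which is where the bandwidth savings and rate-limit protection come from.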
***Note***
For application installs and updates to be distributed, a caching drive must be set up and available at each client location.
As updates are reported to Habitat and schedules permit, the cache is updated and agents are then instructed to update their applications.
Most of the functions in the Application Manager are script based, which means that although a command has been issued, it may take several minutes or more to actually complete the task. You can monitor the status of any of these functions in the script logs inside the agent console for the agent in question.
Application Manager:
The Application Manager is accessible from the main Habitat Console and allows you to configure application searches and set repository package definitions. Once you have a package selected, the right pane shows the number of application names that match your search, or you can toggle over to see the number of agents that will be affected by the software search.
Turtle and Rabbit
There is a master speed setting that controls how often in a day the services for the plugin run. This is depicted as a rabbit or a turtle in the console. In turtle mode, application caching and agent syncs are performed only once per "scheduled" day, whereas in rabbit mode they are performed several times over the same scheduled day.
A scheduled day is every day (daily) by default. However, you can control this by changing the schedule in each of the client consoles when enabling a client for management.
There is a delay setting that controls the staggering of agent script schedules so that not all agents attempt updates or version checks at the same time. You can stagger script schedules by (X) minutes for each agent relative to the agent before it.
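The speed setting and the stagger delay combine roughly as sketched below. The rabbit run count and the offset formula are assumptions for illustration; only "turtle runs once per scheduled day" is stated by the plugin's description.

```python
# Hypothetical sketch of the turtle/rabbit speed setting plus the per-agent
# delay: turtle runs once per scheduled day, rabbit several times, and each
# agent's script start is offset from the agent before it by the delay.

RUNS_PER_DAY = {"turtle": 1, "rabbit": 4}  # rabbit count is an assumption

def agent_offsets(agents, delay_minutes):
    """Stagger each agent's script start by `delay_minutes` after the previous agent."""
    return {agent: i * delay_minutes for i, agent in enumerate(agents)}

print(RUNS_PER_DAY["turtle"])                      # 1
print(agent_offsets(["ws01", "ws02", "ws03"], 3))  # {'ws01': 0, 'ws02': 3, 'ws03': 6}
```

With a 3-minute delay, the third agent starts 6 minutes after the first, so agents never hammer the cache share or the repo simultaneously.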
Client Console:
The client console is where you enable clients for management by checking the Servers and/or Workstations check boxes. You can set a schedule for each client independently. Clients can run daily, weekly, or monthly, and you can pick the day of the week or the day of the month on which you would like automation scripts run on the online agents.
Each agent has several functions available to it; select an agent in the agent list and right-click it to expose the agent's action menu.
Action Menu:
- Open Agent Console - Both Open Agent Console and Open Location Console launch Automate's consoles.
- Open Location Console
- Deploy Framework - Allows you to manually deploy the framework to an agent (optional).
- Set As Caching Agent - Sets the agent as the responsible agent.
- Set As Stand Alone Agent - Sets the agent as a direct-to-repo access agent.
- Set As Normal Agent - Sets the agent as a standard agent that accesses all packages from the cache.
- Update Now - Schedules the update script now.
- Run Version Scan - Schedules a versioning check for now.
- Enable/Disable Auto Updates
- Update Repo Cache - Schedules a repository update now.
- Install Software From Repo Cache - Schedules an install of cached software.
- Install Software From Community Repo - Installs any package directly from the Chocolatey repository.
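The three agent roles in the menu above determine where a package install is sourced from. The sketch below models that routing; the share path, role names, and repo URL default are illustrative assumptions, not values the plugin exposes.

```python
# Sketch of how an install request might resolve its package source based on
# the agent's role set in the action menu. The cache share path and role
# strings are hypothetical; the real plugin drives this via Automate scripts.

COMMUNITY_REPO = "https://community.chocolatey.org/api/v2/"

def package_source(role, cache_share=r"\\fileserver\habitat-cache"):
    """Stand-alone agents go straight to the community repo;
    caching and normal agents work against the location's cache share."""
    if role == "standalone":
        return COMMUNITY_REPO
    return cache_share

print(package_source("normal"))      # \\fileserver\habitat-cache
print(package_source("standalone"))  # https://community.chocolatey.org/api/v2/
```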
By default the schedule is daily, which typically gets a client going in just a few hours, depending on which agents are online. Most clients will be up and running within 24 hours, so allow a little time for automation to sort out a newly enabled client. Once you have the client running with acceptable data in the plugin, you can back the automation down to a weekly or monthly task. Set your delay to 2 or 3 to stagger the agents just a little, and you should be off to the races.
To Cache or not to Cache, Why do we Cache!
Long story short: over the years, Chocolatey.org has started to throttle and block high-volume usage of its public repository. When they see multiple Chocolatey agents making requests to the public repo, they start throttling you, and if it continues over time they start blocking the location's IP address for several hours at a time. This causes havoc when trying to keep locations with large networks up to date. So by default we suggest using caching, and we have made it as easy as possible to set up and execute.
At the heart of the system is the ability to use cache drive locations to feed the mass of agents while having just one agent actually step out onto the internet to manage the cache at each location. This saves the time and energy it would otherwise take for each agent at the location to download the same packages repeatedly, and it prevents exceeding the community repository limits and causing agents to skip installs and updates.
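The savings are easy to quantify with back-of-the-envelope arithmetic: without a cache, every agent downloads every package from the public repo; with a cache, only the caching agent does. The fleet and package counts below are made up for the example.

```python
# Back-of-the-envelope illustration of the requests saved by caching:
# without a cache every agent hits the public repo for each package;
# with a cache only the single caching agent does.

def public_repo_downloads(num_agents, num_packages, use_cache):
    """Count downloads that actually hit the public repository."""
    downloaders = 1 if use_cache else num_agents
    return downloaders * num_packages

agents, packages = 50, 20
print(public_repo_downloads(agents, packages, use_cache=False))  # 1000
print(public_repo_downloads(agents, packages, use_cache=True))   # 20
```

A 50-agent location updating 20 packages goes from 1,000 public-repo downloads to 20, which is exactly the kind of volume drop that keeps a location's IP off the throttle list.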
To configure caching for a location, if it has not already been done, provide an easily accessible network share on the local client network (or through a VPN) where roughly 1.5 GB of application cache can be stored. This storage should support most every need for cache management. You can use NAS drives, domain shares, and workgroup shares; any SMB-accessible storage should do. You must supply a valid username and password for the share for the backend to work correctly.
If you are using VPN-enabled networks at any location that shares storage with another location, then you only need one caching agent for that storage location. You do not need to set a caching agent at each location; setting a caching agent at any one location is enough to manage the cache. The cache share and caching agent do not need to be on the same network segment as long as the caching agent can see the caching share on the network.
Stand-alone agents ignore the cache process and go directly to the public repo. Be careful how you use these: if you mix them in with caching agents at the same network location, they may cause throttling of your caching agent.