
The Journey

I have enjoyed a long career in Information Technology across several industries. Most of my time was spent in healthcare, where I managed field support and help desk functions before moving into management roles, culminating in responsibility for all IT functions within a large radiology practice. My passion, toward the end of my career and now in the consulting realm, is proactive monitoring. For several years I’ve worked with a proactive monitoring company and helped to train their internal staff, resellers, and customers.


Over the years I’ve come to several conclusions.


* Staff is trained on how to operate when all processes are working as expected. A workflow that goes from “A” to “B” to “C” is expected, but what do you do when “B” is not working?

* Resources deemed critical in one industry may not be considered critical in another. Email functionality is critical in a legal environment, but in healthcare it pales next to a critical piece of clinical equipment being offline.

* Vendors may offer proactive monitoring for their own hardware and software components, but they typically have no concern for, and no ability to monitor, other vendors’ resources. It is rare for an environment to rely on only one vendor.

* Proactive monitoring is most often sold into a traditional IT department or to a managed service provider. It is difficult to sell the product into an end-user department because sellers a) don’t want to step away from their comfort zone, and b) lack specific industry knowledge.

* The need for proactive monitoring is often hard to quantify and is overlooked until there is a real problem. Most end-user environments do not track specific downtime or root causes.


Below are some observations I’ve gained over the years.


Pick your monitoring tool wisely

There are many monitoring solutions on the market. Some solutions are part of a vendor’s overall asset management toolset, while others focus solely on the monitoring component. Which approach is best? Sorry, no easy answer.

* Pricing models differ: There are robust free toolsets, but current support and future enhancements should be questioned. Personally, I believe a subscription model based on the number of assets monitored and/or the number of monitored aspects is the fairest approach. Do be careful when you are looking at monitored aspects. A single server could have five, ten, or thirty aspects to monitor.

* Flexibility is critical. Look at the ease of adding or rearranging assets. The ability to clone an asset is especially valuable, as it reduces setup time and maintains consistency.

* Do not expect a toolset to do everything. Evaluate based on the vendor’s time in the industry, industries served, references, and commitment to supporting new standards and protocols. Also look at the vendor’s pricing structure. Is it likely to change?

* Your toolset should be as vendor agnostic as possible. It should monitor server characteristics regardless of make or model. It should support common protocols such as SNMP and WMI, and should understand REST APIs, HTTP, syslog, etc. Do expect your toolset to also have the ability to monitor specific hardware components, as well as specific applications and databases.

* Clearly understand whether your toolset requires an agent to be installed on a target device. Some tools do, and their agents tend to have a much deeper ability to monitor aspects of the device and store results. This is both a blessing and a curse: agents may not be installable on closed or hardened systems and may have operating system limitations. Other toolsets are agentless, which also has pros and cons. Installing an agent on some devices may alter functionality or performance, and could lead to a warranty being voided. It is easy to point a finger at an installed agent as the root cause for a range of issues.

* Avoid 3rd-party code if possible. All toolsets can add code to handle logic not in the base product using a scripting language such as PowerShell, but keep several factors in mind (a minimal example of such a custom check follows this list).

     * 3rd-party code needs to be retested with each tool release, each operating system release, and each application upgrade release. You should assume your vendor has not tested any of these likely scenarios.

     * 3rd-party code may or may not work correctly. Put a process in place to test 3rd-party code in a sterile test environment before moving it to production.

     * Understand what recourse you have if there is a problem. Also understand what to do if the entity who wrote the code is no longer around.

     ** Bottom line: Your environment may require 3rd-party code to address unique hardware or software. Make sure you have a plan.
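
To make the risk concrete, here is roughly what such custom check code looks like: a small script that tests one aspect and reports through its exit code. This is a minimal sketch in Python (the same idea applies to PowerShell), assuming a Linux host with systemd and a placeholder service name; it is illustrative, not any vendor’s plugin API.

```python
import subprocess
import sys

SERVICE = "postgresql"  # placeholder: the one aspect this custom check watches

# Exit codes follow the common Nagios-style convention (0=OK, 1=WARNING,
# 2=CRITICAL); confirm the contract your own toolset actually expects.
try:
    # systemd example (Linux); a Windows shop would query services via PowerShell/WMI.
    result = subprocess.run(["systemctl", "is-active", "--quiet", SERVICE], timeout=10)
    if result.returncode == 0:
        print(f"OK - {SERVICE} is running")
        sys.exit(0)
    print(f"CRITICAL - {SERVICE} is not running")
    sys.exit(2)
except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
    print(f"WARNING - the check itself failed: {exc}")  # a broken check is not a broken service
    sys.exit(1)
```

Every tool release, OS release, and application upgrade is a chance for a script like this to silently break, which is exactly why the retesting discipline above matters.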


Pick your model

Vendors offer different hosting models. Each approach has pros and cons.

* Customer hosted: In this model the customer hosts the back-end heavy-lifting compute and storage resources. This approach requires a level of ongoing ownership for hardware, compute power, upgrades, and backups. Internal IT may dictate this approach. Email and SMS integration are often overlooked.

* Vendor hosted: In this model the vendor hosts the back-end heavy lifting, which typically implies faster troubleshooting, faster update cycles, and the ability to test beta versions. Make sure…

     * You have the ability to capture data if their cloud is unreachable (see the buffering sketch at the end of this section).

     * Monitoring services are fully functional as compared to the customer-hosted model. Don’t assume.

* In both models…

     * Understand the redundancy of compute and storage resources. Also understand what is available in terms of redundant WAN connections.

     * Understand how monitoring is handled at different locations within your enterprise. Yes, it is important for some monitoring to be located at your server location, but how is monitoring handled at different buildings, different wiring closets, and even down to the department level?

     * Understand that internal IT will have to be involved in your approach. There will be firewall implications, network traffic, and the likelihood of standalone data-capture monitoring devices.
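
On capturing data when the cloud is unreachable, one common safeguard is a local store-and-forward buffer: samples that cannot be delivered are spooled to disk and replayed later. Below is a minimal sketch; the ingest URL and spool directory are placeholders, and a real agent would add authentication and a retry policy.

```python
import json
import time
import urllib.request
from pathlib import Path

BUFFER = Path("/var/spool/monitor-buffer")           # local spool directory (placeholder)
CLOUD_URL = "https://collector.example.com/ingest"   # hypothetical vendor ingest endpoint

def send(sample: dict) -> bool:
    """POST one sample to the vendor cloud; return True on success."""
    req = urllib.request.Request(
        CLOUD_URL,
        data=json.dumps(sample).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5):
            return True
    except OSError:
        return False

def record(sample: dict) -> None:
    """Try to send; if the cloud is unreachable, spool the sample to local disk."""
    if not send(sample):
        BUFFER.mkdir(parents=True, exist_ok=True)
        (BUFFER / f"{time.time_ns()}.json").write_text(json.dumps(sample))

def flush() -> None:
    """Replay spooled samples, oldest first, once connectivity returns."""
    for f in sorted(BUFFER.glob("*.json")):
        if send(json.loads(f.read_text())):
            f.unlink()
        else:
            break  # still unreachable; try again on the next cycle

record({"host": "server01", "metric": "cpu_pct", "value": 42, "ts": time.time()})
flush()
```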


Design considerations

All toolsets that I’ve reviewed allow flexibility when it comes to design, and there is no single best way to define your environment. Your design will be shaped by your IT and support staff, and it will change over time. Several thoughts:

* Grouping options: I’ve seen assets grouped by team, by location, by application, and by ownership.

* Environments: Grouping assets by environment is not often considered but is my favorite way to initially define asset hierarchy. Here are some different environments:

     Core: Assets in this category support all of the other environments and typically include network equipment, WAN and LAN connections, and wireless. Core assets take top priority, are monitored on a more frequent polling schedule, and include a notification audience typically with a 7x24 response requirement.

     Production: As the name implies, assets in this category support production applications and will include a broader notification audience, including key operational staff and application owners.

     Disaster Recovery: The disaster recovery environment typically implies a different location and different hardware, and may include external resources. DR environments may be a 100% mirror of production or may have less compute and storage, which implies much different thresholds.

     Test, training, development: These environments are not so obvious and are often overlooked. They may be at different locations, on different hardware, or possibly on the same physical server as separate virtual machines running on different ports.

* What to monitor: You cannot monitor everything; it would be too expensive and carry too much overhead. Start small and grow. Some considerations (a minimal check-loop sketch follows at the end of this section):

     * Identify network equipment, WAN and LAN connections.

     * Identify servers and ping them by IP address or DNS name.

     * Identify applications and pick a few easy aspects, such as expected services.

     * Identify internal and external URLs.

     * Identify end-user-facing hardware: critical workstations and printers.

     * Identify unique hardware.

* Learn your audience(s): IT is a given, but don’t stop there. Application owners and managers will be your biggest supporters if they are brought into the design and notification process.

* Understand that design perfection is impossible. Use your toolset to create purpose-built views of assets. As an example, a department scheduler may want a view of assets that shows only a single department. A good toolset will let you create visual maps that are easy to read (red, yellow, green), providing only information relevant to the department.

* Avoid “death by notification”. Notifications are easy to create, and it is just as easy to overwhelm your audience with a) too many notifications, b) notifications that don’t apply to their role, and c) notifications that don’t clearly explain what is wrong or hint toward a course of resolution. Your audience will quickly become numb to notifications that are not well thought out. Initially, add your design team to ALL notifications to monitor the process.
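
To make the starting point concrete, below is a minimal inventory-driven check loop. It is a sketch only: the names, addresses, and URL are placeholders, and a real toolset keeps this inventory in its own database and adds yellow/threshold logic on top of the simple green/red shown here.

```python
import socket
import subprocess
import urllib.request

# Placeholder inventory: (name, kind, target). In a real deployment this list
# comes from the toolset's asset database, not hard-coded values.
ASSETS = [
    ("core-switch", "ping", "192.0.2.1"),                            # network gear
    ("app-server",  "tcp",  ("192.0.2.10", 443)),                    # expected service port
    ("intranet",    "url",  "https://intranet.example.com/health"),  # internal URL
]

def check(kind, target):
    """Return 'green' or 'red' for one asset (a real tool adds 'yellow' thresholds)."""
    try:
        if kind == "ping":
            # ICMP needs privileges, so shell out to ping (Unix-style flags shown).
            ok = subprocess.run(
                ["ping", "-c", "1", "-W", "2", target], capture_output=True
            ).returncode == 0
        elif kind == "tcp":
            # Is the expected service accepting connections?
            with socket.create_connection(target, timeout=2):
                ok = True
        else:  # "url"
            # Does the page answer with an HTTP success code?
            with urllib.request.urlopen(target, timeout=5) as resp:
                ok = resp.status < 400
    except OSError:
        ok = False
    return "green" if ok else "red"

for name, kind, target in ASSETS:
    print(f"{name:12s} {check(kind, target)}")
```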


AI is everywhere

We are just beginning to see where AI algorithms will fit in our world. Dependency on these algorithms will grow, and yes, they have monitoring implications. Every algorithm, regardless of industry and regardless of where it is located, has the same basic layout: it takes inputs, processes them, and produces outputs. IT does not need to totally understand what each algorithm does. IT does need to ensure that the inputs, the algorithm itself, and the outputs are identified and monitored. Will IT be called into question when AI was not used to help solve a problem? My prediction is yes.
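
In monitoring terms, that boils down to a question asked continuously: are inputs still arriving, and are outputs still being produced? Below is a minimal sketch which assumes, purely for illustration, that a pipeline’s inputs and outputs land as files in watched folders; the paths and age limits are placeholders.

```python
import time
from pathlib import Path

# Placeholder paths and freshness limits for a hypothetical AI pipeline:
# inputs should arrive at least every 15 minutes, outputs every 30.
WATCH = [
    (Path("/data/ai/inbox"),  15 * 60),
    (Path("/data/ai/outbox"), 30 * 60),
]

def freshest_age(folder: Path) -> float:
    """Seconds since the newest file in the folder changed (inf if empty/missing)."""
    if not folder.is_dir():
        return float("inf")
    mtimes = [f.stat().st_mtime for f in folder.iterdir() if f.is_file()]
    return time.time() - max(mtimes) if mtimes else float("inf")

for folder, limit in WATCH:
    age = freshest_age(folder)
    status = "OK" if age <= limit else "STALE -> notify"
    print(f"{folder}: newest file {age:.0f}s old ({status})")
```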


Sit, watch, and listen

Many monitoring touchpoints are obvious, but many are not identified until you put yourself in the end-user’s chair. Take the time to meet with end-users, watch what they do (applications used, peripherals used, external websites accessed) and ask about their pain points. 

     * A check-in person at a doctor’s office may access an insurance company’s website all day long. Is uptime and latency on the site considered an IT problem? IT will say no, and the end-user will say yes. Should the external resource be monitored? In my opinion – absolutely. (A latency-probe sketch follows this list.)

     * A person working the service desk at an auto repair shop constantly looks up part numbers via an external website. The same criteria apply, and yes, I would monitor the external site.

     * The same person working at the service desk uses a USB-attached bar code scanner all day long. Should your proactive monitoring make sure the agent’s workstation can see the USB device? Once again, I say yes.
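
For the insurance-site example above, a minimal uptime-and-latency probe might look like the sketch below. The URL and the two-second limit are placeholders; ideally the probe runs on the end-user’s network segment, since latency measured from the data center proves little about the check-in desk’s experience.

```python
import time
import urllib.request

SITE = "https://portal.example-insurer.com/"  # placeholder for the external site
LATENCY_LIMIT = 2.0                            # seconds; tune from gathered history

start = time.monotonic()
try:
    with urllib.request.urlopen(SITE, timeout=10) as resp:
        up = resp.status < 400
except OSError:
    up = False
elapsed = time.monotonic() - start

if not up:
    print(f"DOWN after {elapsed:.2f}s -> notify")
elif elapsed > LATENCY_LIMIT:
    print(f"UP but slow: {elapsed:.2f}s (limit {LATENCY_LIMIT}s) -> warn")
else:
    print(f"UP in {elapsed:.2f}s")
```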


Set it, but don’t forget about it

Initial setup is a painful process: it will involve internal staff, external staff, application owners, vendors, and end-users, and it will go deeper and take longer than ever imagined. History suggests:

     * Identify all assets and add them into your tool. Your initial design may be right or wrong, likely somewhere in the middle. Initially it is more important to identify than to fine-tune.

     * Don’t initially set any threshold parameters (high or low). Let the tool run, gather history, then go back and set your baseline parameters (see the sketch after this list).

     * Mark your calendar to review all parameters at least annually. Why?

          * Infrastructure changes. Bandwidth will likely increase, and the number of hops between point “A” and point “B” will likely change. If your bandwidth has improved, then latency monitoring should adjust. If additional hops are found, the implication is more touchpoints to address.

          * Resources move, and the moves may not be obvious. A server moving from on-prem to a colocation facility could keep the same IP address. Once again, thresholds may need to be adjusted, and new hops come into play.
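
One way to turn gathered history into baseline parameters is a simple statistical rule of thumb: warn at roughly two standard deviations above the mean, alert at three. Below is a minimal sketch assuming the history has been exported to a CSV with a numeric value column; the file name and column are placeholders, and baselines should always be sanity-checked against known-good behavior before they go live.

```python
import csv
import statistics

HISTORY = "latency_history.csv"  # placeholder export with a numeric 'value' column

with open(HISTORY, newline="") as f:
    values = [float(row["value"]) for row in csv.DictReader(f)]

# Rule-of-thumb baselines (needs at least two samples): warn above mean + 2 sigma,
# alert above mean + 3 sigma.
mean = statistics.mean(values)
sigma = statistics.stdev(values)
print(f"n={len(values)}  mean={mean:.1f}  "
      f"warn>{mean + 2 * sigma:.1f}  alert>{mean + 3 * sigma:.1f}")
```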


Use your monitoring system as a real-time source of documentation

A well-constructed monitoring system is an invaluable tool when it comes to identifying the entire environment. All assets should be defined whether they are owned, leased, or belong to a vendor. Why?

     * Staffing turnover: Staff will come and go. Duties may be handled internally or externally and can change over time. Use your tool as a source of knowledge.

     * Fires and floods: Natural events do happen, as do self-inflicted issues such as electrical problems and sabotage. Use your tool to help identify and rebuild.


Medical Imaging Insights

I have a few insights, having worked in the medical imaging sector for many years.

* The audience to receive notifications is not typically IT. More likely application specialists should receive initial notifications so they can use their expertise to determine if an issue is related to clinical hardware or clinical software. Application experts can better determine the priority that needs to be assigned to resolution.

* It is likely that a change in IT infrastructure will trigger a problem. Replacing network components and/or rewiring are likely causes. If there is a formal change management process, then engage so notifications can be adjusted during known maintenance windows. 

* Clinical equipment will need preventative maintenance, done either by the vendor or internally by IT or a biomed department. Planned downtime will trigger unwanted notifications unless notifications are paused (a pause-logic sketch follows this list). Make sure your monitoring team is engaged; otherwise, expect severe cases of “death by notification”.

* Clinical applications may be located on-site, at a colocation facility, at a vendor’s hosting facility, or at a 3rd-party hosting location. Applications change and move, so keeping your monitoring updated will be critical.

* A medical imaging environment is complex, involving many vendors and many touchpoints inside and outside your organization. Start small and then expand. Below is a possible approach.

     * Identify modalities, then reading workstations. These two components are high visibility and patient/Radiologist facing. You will gain acceptance by making these user groups happy.

     * Identify, map, and monitor all DICOM and HL7 internal feeds. This is far from simple and will help identify applications such as RIS, PACS, EA, VNA, VR, and AI.

     * Next, add in all the ways DICOM and HL7 enter and exit your network.

     * Next, address all database and application monitoring.

     ** This is not comprehensive, but a great start.
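
Below is a minimal sketch of the pause logic mentioned earlier: each notification is checked against a list of planned maintenance windows and is logged rather than sent while a window is open. The asset name, times, and in-code list are placeholders; a real deployment would pull windows from the change-management process rather than code.

```python
from datetime import datetime

# Placeholder maintenance windows: (asset, start, end). In practice these come
# from the change-management system or the toolset's scheduler, not from code.
WINDOWS = [
    ("MRI-1", datetime(2025, 6, 7, 22, 0), datetime(2025, 6, 8, 2, 0)),
]

def suppressed(asset: str, when: datetime) -> bool:
    """True if the asset sits inside a planned window and alerts should be paused."""
    return any(a == asset and start <= when <= end for a, start, end in WINDOWS)

def notify(asset: str, message: str) -> None:
    if suppressed(asset, datetime.now()):
        print(f"[suppressed] {asset}: {message}")  # log it, but page no one
    else:
        print(f"[ALERT] {asset}: {message}")       # hand off to email/SMS here

notify("MRI-1", "modality not responding to DICOM echo")
```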


Enjoy your proactive monitoring journey.
