How to Monitor, Alert, Diagnose, and Report SQL Server Performance with

IDERA SQL Diagnostic Manager

00:08

Introduction to Performance Monitoring with SQL Diagnostic Manager

Welcome everyone to the webcast today. We're going to be looking at our SQL Diagnostic Manager product, as you can see. Before we get things started, I just want to touch on a quick admin note. We have a lot of people joining today, so to make sure we don't have any background noise, we have everyone in listen-only mode. However, if you have any questions, you can post them in the little question panel in the GoToMeeting pane there on the right-hand side. Also, I'm going to touch on a couple of slides here, but you can actually download my slides from the little handout section if you want to take a copy of those as well. I don't plan on spending a lot of time on slides; I have just a couple, and then we'll jump into the actual presentation with the demonstration of the product. The product we're talking about, SQL Diagnostic Manager, is a tool that you can download from the Idera website. That's www.idera.com. If you go to the Products dropdown section, you'll find SQL Diagnostic Manager for SQL Server, and there is a little button to start your trial. With a click of that button, you enter a bit of information about yourself, and you're immediately presented with the download of the software, and the software comes with the trial license key already embedded in it. You can basically hit the ground running from the moment you install the software. Okay, without further ado, I'll go ahead and get things going here. Like I said, I only have a couple of slides before we get into the actual presentation. Again, if you have any questions, just post them in the little question panel; I'll get to them toward the end of the presentation today.

Overview of SQL Diagnostic Manager

I wanted to touch on the architecture of the product, and probably before we get into the architecture, it'd be good to give an introduction to what this product is all about for any of you that are completely new to Diagnostic Manager. Granted, this product has been around now for over two decades, and a lot of the SQL Server DBA community is aware of it. We've been sponsors at different events, of course, SQL PASS; we were always in attendance at that event. But there may still be some of you that are not familiar with Diagnostic Manager. It is our flagship product for performance monitoring of SQL Server. What I'd say really differentiates this product from other tools that you may have used in the past is that it is more than just a symptom-based monitoring tool. If you've dealt with any other monitoring tools, obviously they will have the little green, yellow, red status to give you an idea that, hey, we have a problem over here with CPU or memory bottlenecks or disk performance, or perhaps even with query performance or wait stats, all of that. A lot of these tools don't really give you enough information to help answer why those problems are happening. That's really what Diagnostic Manager, as the name implies, is all about: giving you deep-dive diagnostics into some of those performance issues. And really, we've evolved the tool now to take it a step further, to give you even more information on how to fix the issues through the prescriptive analysis feature. That feature will basically scan an instance and tell you everything that it identifies as a potential issue, from server configuration to database configuration to resource utilization to queries that are running poorly, wait stats, blocking, deadlocking, all of that. It'll give you a comprehensive report on all the problems it discovered. That was kind of a long-winded introduction to what Diagnostic Manager is all about.
When you install the product, though, you're going to have a very simple install process.

Installing SQL Diagnostic Manager

It takes about five to ten minutes to install. It has a three-tier architecture, as far as the way data flows through the product. You have the collection service that reaches out and captures the data from the remote SQL instances. There is nothing that gets installed on the actual SQL instance, okay? There are no agents, for example, no extraneous stored procedures, and we don't create any extra databases on the instance. We try to stay away from putting anything on your actual production instance.

SQL DM Collection Service

The collection service is really the worker bee. It's centrally located, and it reaches out and gathers performance data from various dynamic management views: sysperfinfo, dm_exec_sessions, and sysprocesses for the session snapshots, and Extended Events or Query Store for query collection. You have different options for how you can capture data depending on the version of SQL Server. Index physical stats for fragmentation, of course, we monitor that as well.
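Counters like CPU time and reads in those session views are cumulative, so an interval-based collector typically has to diff consecutive snapshots to get per-interval activity. Here is a minimal Python sketch of that idea. This is not the product's actual code; the field names and session IDs are purely illustrative.

```python
# Sketch: turning cumulative per-session counters from two polled
# snapshots into per-interval deltas, the way an interval-based
# collector might. Field names are illustrative, not Idera's schema.

def snapshot_deltas(prev, curr):
    """prev/curr: dicts mapping session_id -> cumulative counters."""
    deltas = {}
    for sid, now in curr.items():
        # A session first seen this interval starts from zero.
        base = prev.get(sid, {"cpu": 0, "reads": 0})
        deltas[sid] = {
            "cpu": now["cpu"] - base["cpu"],
            "reads": now["reads"] - base["reads"],
        }
    return deltas

prev = {51: {"cpu": 1000, "reads": 400}}
curr = {51: {"cpu": 1600, "reads": 450},   # existing session advanced
        62: {"cpu": 200,  "reads": 30}}    # new session since last poll
print(snapshot_deltas(prev, curr))
```

The same diffing shape applies whichever source feeds the snapshots, which is why the collector can swap between DMVs, Extended Events, or Query Store per SQL Server version.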

SQL DM Management Service

That’s been in the product for quite some time, but the collection service will gather that data, hand it off to the management service. The management service also centrally located is where your alert actions take place. That is one of the benefits of this tool is that whenever a problem pops up, you can have it perform different types of actions in response to those symptoms like email you or off scripts or jobs or PowerShell commands, things like that, or integrate some of the alert information with third party systems.

Client, Web, and Mobile Consoles

The management service also writes data to the back-end repository, and that is a SQL Server database. There's also the .NET client; that's the primary way that we work with the product and configure the tool. There is an optional web interface and a mobile interface as well, if you want to take advantage of them. Today, you're going to see me primarily stick with the .NET client, though, because that is where you get the most information from the tool. That's where we typically go to analyze what's going on in real time with all the activity.

Licensing and Team Access

All right. The product is licensed based on the number of SQL instances that you decide to monitor, and not based on the number of users of the product. If you do have other team members that you want to share this data with, you can give them access. With the security model that we have built into this product, you can secure the tool so that certain team members perhaps have read-only access. Other team members have the ability to configure thresholds. Others might have more of a network operations center type role, where they go in and clear out alerts. And there is the administrator role, where you can actually manage the tool and interact with remote instances, like killing a blocking process and things like that.

SQL Diagnostic Manager Demonstration

All right. Let’s go ahead and just jump into the product. I know you’re here for the demonstration of the tool, so I’ll go ahead and expand out my little environment here.

Navigating the Management Console

And this is the Diagnostic Manager management console. We recently gave it a little facelift here and improved the navigation to have more of a hierarchy-type view. There's still the older navigation on the left-hand side, what I call the Outlook-style navigation, where you select an object that draws your attention and you're able to view the relevant data for that object on the right-hand side. We're still, at this point, kind of on the high-level, symptom-based view. I introduced the product indicating that it's something that will give you diagnostic drill-down capability, but there are levels to that. It starts off with the symptom level; we drill into something that draws our attention, we get down into the diagnostic level, and we'll get down deeper into some of the findings and recommendations and things like that. Right now we're looking at the green, yellow, red status across my enterprise. My enterprise is small, but this is a scalable solution. Whether you're looking to monitor 50 or 150 or 350 instances, it will grow and scale with your environment over time. All right. On the left-hand side, we have a couple of little window panes here. There's the My Views pane at the very top, which comes with these little containers already populated. Basically, the idea here is that as instances change status, based on the different events that pop up on the instance, those instances will move between these containers. First thing in the morning, if you want to see your most critical instances, you can jump right to that Critical container, and it's going to display all of the instances that are most critical in your enterprise. Then you can click on any of the instances and start that drill-down process, if that's what you want to do. There's also, as you can see, a status for warning, for instances that are in OK status, and for maintenance mode; that is something nice about the tool.
You can actually put instances in maintenance mode, where they do not send out alerts during certain time windows. If you're planning to do reboots or upgrades and things like that, you may want to put an instance in maintenance mode so it doesn't send out alerts.
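The logic behind that kind of alert suppression is simple enough to sketch. Here's a minimal Python illustration of a maintenance-window check; the window times are made up, and this is the general idea rather than Diagnostic Manager's implementation.

```python
# Sketch: holding alerts during a maintenance window, similar in
# spirit to maintenance mode. Window times are illustrative.

from datetime import time

def in_maintenance(now, windows):
    """windows: list of (start, end) time-of-day pairs; end exclusive."""
    return any(start <= now < end for start, end in windows)

def should_send_alert(now, windows):
    return not in_maintenance(now, windows)

windows = [(time(2, 0), time(4, 0))]           # reboot window, 2-4 AM
print(should_send_alert(time(3, 0), windows))  # False: hold the alert
print(should_send_alert(time(9, 0), windows))  # True: business as usual
```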

Tags

The second window here is called Tags, and tags is just another name for groups. You might ask, well, why don't you just call it groups? Well, the label "tags" is, in a sense, like labeling something, putting a tag or a label on it, that something being a SQL instance, to label it based on different attributes. It could be based on SQL version or edition, perhaps allocated resources, hosted applications, perhaps DBA owners, customers, however you want to create these tags. By doing that, you're able to create these multiple tag groups so that if someone reports a problem, you don't have to weed through 150 instances to find the one or two instances relevant to that problem. You can jump right to where you need to be very quickly by selecting the tag from the list. You can see I've created a few little tags that would perhaps be popular names in any environment, like development and production; I have SharePoint. I also have my payroll application. I like to pick on my payroll application, the idea being that maybe we have a couple of instances that are feeding data to this application. If there's a problem, I'm going to get alerts around it, for sure. I'll get that email notification instructing me that, hey, there's a problem that we might need to deal with here, and I'll have this red indicator on the payroll application when I come into the console. The email that I receive, by the way, is going to have a lot of good information, however I decide to customize my alert message, where it automatically and dynamically embeds information into that email. So I already know a lot of information before I even open the console, but once I open the console, I see there is a red X on the payroll application. Let's just go ahead and select that. It brings me to another view of a couple of instances. Like I said, I have two instances that feed data to the payroll application.
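Conceptually, a tag is just a named set of instances, and an instance can carry several tags at once. A quick Python sketch of that many-to-many lookup, with made-up instance names echoing the demo:

```python
# Sketch: tag-based grouping of monitored instances, so you can jump
# straight to the instances behind an application. Names are made up.

tags = {
    "Production": {"CustomerSQL", "IderaProdVM", "SP-SQL01"},
    "Payroll":    {"CustomerSQL", "IderaProdVM"},
    "SharePoint": {"SP-SQL01"},
}

def instances_for(tag):
    """All instances carrying a given tag, in a stable order."""
    return sorted(tags.get(tag, set()))

print(instances_for("Payroll"))  # ['CustomerSQL', 'IderaProdVM']
```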
One of them, CustomerSQL, is perhaps hosting a salary database, because after all, this is the payroll application. IderaProdVM is perhaps hosting a personnel records database, but they both come together and share data with this payroll application. I'm just using a hypothetical there. My point is, we can see here with those two instances if there are any particular issues popping up. CustomerSQL has a green check mark, and everything seems to be okay there. IderaProdVM, we have a red X, and a quick mouse-over shows a little pop-up giving me a high-level summary of some of the top issues that exist. I do have some blocking popping up there and a database in emergency mode. This is probably where I need to start drilling in. I'll go ahead and click on that instance. It takes me to the dashboard view. The dashboard view starts off, when you first come into the console after you've installed, displaying some of the common performance indicators that DBAs like to look at, like CPU utilization, memory performance, and network activity.

Cloud Virtual Machine Monitoring

It actually provides visibility even outside the operating system, at the VM level: host to the VM, to the operating system, to SQL Server. This tool also supports Amazon and Azure monitoring as well. If you're moving in that direction, you can feel comfortable knowing that this tool will give you the benefits of monitoring the cloud as well. Right out of the box, you'll have the common performance metrics: sessions, blocking, and deadlocking in the sessions category, cache, server waits. You can also, as you can see, there's a little button here to customize, and you can even add your own metrics to this dashboard, including your own custom counters, perhaps even your own custom query, if that's what you want to plug into the tool. Really, the sky's the limit with this tool. You can mold it around your own environment's needs, with your own queries that you want to plug in, or additional PerfMon or WMI counters, or Hyper-V or VMware metrics, or additional Azure metrics if you want. You can put those charts on this dashboard if you wanted to. All right, so I can see the charts change if there's anything happening with alerts on any of the data in this dashboard view, and then select one of the charts to drill into it. I'm going to drill into the session activity here in just a minute. Before I do that, I wanted to show you where you go to configure those alert actions, because after this presentation, you might want to download the product and try it out in your own environment.

Server Alerts

You might want to know: where do I go to set up my alert actions? We do that by going up here to the top-right section of the console (your GoToMeeting pane might be in the way, so you might need to minimize it), where there's a little wrench icon. Notice as I mouse over, it has the Tools pop-up; it's the Tools menu. If I click that, I'll go down to Alert Actions and Responses, and it brings up this little window. These are the various action providers that are available for alert actions in response to threshold conditions being met. A lot of these I've already named off: email, writing to the application event log, enabling query data collection. I collect query data all the time, but if we're talking about, say, a dev instance, maybe you only want to turn on query data collection when a problem happens, run it for five minutes, and then turn it off, for example. You have that option, or SNMP traps down here at the bottom. You can kick off your own SQL scripts to respond to certain problems, or kick off SQL jobs. We play friendly with System Center Operations Manager, otherwise known as SCOM; we can actually dynamically update alerts and events to that system. And of course, you can kick off third-party programs and PowerShell commands and things like that. This is also where you configure these action providers. For example, if I double-click on this SMTP provider, notice that's where I put in my SMTP relay IP address or fully qualified domain name, and put in my login details and the sender information. You can even test it to send yourself an email, just to make sure you have good connectivity. All right. The alert actions and responses are based on certain rules that you create, very similar to Outlook automation; that's the example I like to use. It's like Outlook automation, where you define certain conditions. I basically opened up one of my rules here: you give it a friendly name, and you define the conditions that you're looking for to trigger the alert actions.
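To illustrate what that customizable email action boils down to, here is a small Python sketch that composes an alert message with the alert's details dynamically embedded in the subject and body. The addresses, field names, and relay host are hypothetical; only the Python standard library is used.

```python
# Sketch: composing an alert email with alert details dynamically
# embedded, in the spirit of the SMTP action provider's customizable
# message. All names/addresses here are made up.

from email.message import EmailMessage

def build_alert_email(alert, sender, recipient):
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = f"[{alert['severity'].upper()}] {alert['metric']} on {alert['instance']}"
    msg.set_content(
        f"Instance: {alert['instance']}\n"
        f"Metric:   {alert['metric']}\n"
        f"Summary:  {alert['summary']}\n"
    )
    return msg

# Actually sending would go through the configured relay, e.g.:
#   import smtplib
#   with smtplib.SMTP("smtp.example.com") as s:
#       s.send_message(msg)

alert = {"severity": "critical", "metric": "Blocking Wait Time",
         "instance": "IderaProdVM",
         "summary": "Session 111 is blocking session 129"}
msg = build_alert_email(alert, "sqldm@example.com", "dba-team@example.com")
print(msg["Subject"])  # [CRITICAL] Blocking Wait Time on IderaProdVM
```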

Responding to Alerts

The second window is where we select the actions, from that alert action provider list, to kick off in response to those conditions. The third window at the bottom is where we fine-tune the rule around what we've selected in the top two windows. In the first window, the conditions can be based on alert conditions happening on certain instances or tag groupings of instances; on specific metrics, or multiple metrics having a threshold condition at the same time; or on when a metric's severity changes, like when it goes to warning, send me an alert or notification, and if it goes to critical, send me another email notification or kick off an alert action, however you want to configure that, but it'll only do it once, and only when the severity changes. There's also when a refresh happens within a specific time window, and we can put alert ranking around thresholds as well, where we score certain metrics and their thresholds as more elevated or more important than others, if you want. And there's "where metric severity is unchanged for a specific time period"; that's more useful, I guess, for escalation purposes, for example. All right. For the actions down here, I put in a couple of examples, like kicking off a script. This is my own script, basically, because for blocking wait time, I might want to kill a blocking condition, but I don't want to, as a blanket statement, kill all blocking. If I have this pesky blocking condition that's always popping up and I really want to narrow down what I do around that particular condition, I might want to put in my program, database, and login to make sure I'm only targeting that. You can get really creative with this. This was my own little creative script, but basically what I did is I take from these more options (and you'll find with all the alert actions, you have this little dropdown) and you can take information from the alert details and embed it in your own programs or PowerShell commands or SQL scripts, in this case.
So it's really customizable. What I did here is I took the alert summary; you can see it right here. I put that little parameter right into my script. What I'm doing, in essence, is taking the text that would go along with the alert summary and pasting it into my own command. Basically, that parses out the session ID, which is part of the alert summary, validates the program, login, and database, and then kills the session that meets the details I've embedded in my parameters up above. Again, that's my little creativity, but there are a lot of different alert action options. Here's the email; again, you have those little dropdown options. You can customize your email message to yourself or to your team, the subject line and message body, so when you get the email, you have the right information you need to see right away.
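The guarded kill logic I just described can be sketched in a few lines. This is my reconstruction of the idea in Python rather than the actual T-SQL script from the demo, and the summary text format, program, login, and database names are all hypothetical.

```python
# Sketch: a guarded "kill the lead blocker" alert script -- parse the
# session ID out of the alert summary text, then only act when the
# program/login/database match the one pesky condition being targeted.

import re

def parse_spid(alert_summary):
    """Pull the blocking session ID out of the alert summary text."""
    m = re.search(r"session (\d+) is blocking", alert_summary, re.IGNORECASE)
    return int(m.group(1)) if m else None

def should_kill(spid, sessions, program, login, database):
    """sessions: spid -> session attributes, as a sessions query would return."""
    s = sessions.get(spid)
    return bool(s) and (s["program"], s["login"], s["database"]) == (program, login, database)

sessions = {111: {"program": "PayrollApp", "login": "payroll_svc",
                  "database": "Northwind"}}
spid = parse_spid("Session 111 is blocking session 129 (wait time 185 sec)")
print(spid, should_kill(spid, sessions, "PayrollApp", "payroll_svc", "Northwind"))
```

The point of the extra validation is exactly the one from the demo: never kill blocking as a blanket policy, only the one well-understood condition you intend to target.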

Prescriptive Analysis

There's also this prescriptive analyzer. This is what I was talking about earlier when I mentioned the evolution of Diagnostic Manager and how it is more than just symptom-based monitoring. It has evolved into a diagnostic tool, but it also now provides recommendations, and even fix-it scripts in some cases. The way it works is we use this prescriptive analysis feature, which, as you can see, can be used as an alert action, or you can run it manually, or you can schedule its run; you have different options there. As an alert action, you select what categories you want it to analyze, and these are all the different categories. If you wanted to narrow it down to just one particular section, you can go in and be very selective about what you want it to analyze. I have everything here selected so that I get a fully detailed report. So let's go and cancel this out. Let's say the alert action, with the blocking condition in the background, kicked off at this point; we would then have a report in the prescriptive analysis section. Prescriptive analysis is actually at the very end of the little tab navigation here. As you can see, we have that hierarchy I talked about earlier. Under the servers, I have Overview, Sessions, Queries, Resources, Databases, Services, Logs, and Analyze. We labeled those for the kinds of data you're going to see below each one of those tabs, and prescriptive analytics is going to be under this Analyze section. We go to that section, and I have a couple of reports that have been created. I can just take the most recent report and click on it. It brings up this report, showing me a heat-priority ranking of everything it had discovered as a potential issue. Some of these issues are more FYI, like an integrity check hasn't been performed in X number of days, for example. Some of them are more focused on performance.
Here we have a large table, with greater than or equal to 1 million rows, that has had a high number of modifications since the last time statistics were updated. You have all the details of the findings down here at the bottom; that was my SSISDB. Very well, let's go down to this one: we have an index experiencing some latch contention, and duplicate indexes discovered. Whatever you select down at the bottom, you'll see the details of what it discovered as a potential problem, and it gives you the details of when it's a problem. Sometimes it'll even tell you when it would not be a problem; you get different schools of thought on that topic. I always pay attention to these buttons up here at the top when I click on different findings. Notice here, with the duplicate index discovery, the Optimize Script button lights up. If I click on that, it'll actually give me the script. Of course, this is a pretty simple one; it's dropping a duplicate index, in essence, and you could run it or copy it out to share with the rest of your team. You might even decide to export these findings out to an Excel spreadsheet. It will literally take all the findings and recommendations; sometimes there are even learning links out to other articles on the web that back up our findings and recommendations. And then you can export that out. If you have kind of a red-tape scenario where you have to share this information with the rest of your team, you can give them a copy in Excel spreadsheet format, and then you can go through it together and decide which ones you want to accept or not accept. Some of these findings are also focused on query tuning. The finding text has somewhat generic labeling, but we get into the details as we go down. Like here, we have "no join predicate has been detected," "a search predicate contains a column expression in a function," and "use of SELECT *."
Obviously, that would return a lot of data, perhaps data we don't even require, so it's identifying that as a potential issue. And here's a cursor that can be optimized for better performance. Whatever you highlight in the list, again, up here at the top we look at these buttons, and here we have Show Problem. If I click on that, it'll highlight, in the statement, where I need to go to rewrite that query. Usually it will highlight one command in blue and the entire line to be rewritten in yellow, but this being a cursor, it highlighted the entire section, both blue and yellow, to be rewritten. Anyway, you can take this information and share it with your team. Perhaps we also get some other recommendations about the blocking condition that we know exists from the dashboard view. Let's say we're going to go in now and address that issue, since we've already reviewed the details of this prescriptive analytics report.
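One of those findings, "duplicate indexes discovered," is easy to illustrate: two indexes on the same table with identical key columns are redundant, and one is a drop candidate. Here is a Python sketch of how such a finding could be derived; the table and index names are hypothetical, and real analysis would also weigh included columns, uniqueness, and key order.

```python
# Sketch: deriving a "duplicate indexes" finding -- group index
# definitions by (table, key columns) and flag any group with more
# than one index. Names are illustrative.

from collections import defaultdict

def find_duplicates(indexes):
    """indexes: list of (table, index_name, key_columns tuple)."""
    by_key = defaultdict(list)
    for table, name, keys in indexes:
        by_key[(table, keys)].append(name)
    return {k: names for k, names in by_key.items() if len(names) > 1}

indexes = [
    ("dbo.Orders", "IX_Orders_CustomerID", ("CustomerID",)),
    ("dbo.Orders", "IX_Orders_Cust_Dup",   ("CustomerID",)),   # duplicate keys
    ("dbo.Orders", "IX_Orders_OrderDate",  ("OrderDate",)),
]
print(find_duplicates(indexes))
```

The corresponding "optimize script" would then just be a DROP INDEX for the redundant member of each flagged group.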

Session Reporting

Going back to the overview screen, we'll get out of history mode here. We'll go down to the sessions view and click on Sessions, and it takes us into the session statistics, like the response time, the sessions (active, idle, and system sessions), and lock statistics. Under lock statistics, we have different sub-categories: waits, average wait time, deadlocks, requests, timeouts, and then blocked sessions. At this point, knowing that we have the blocking in the background, and just to be candid with you all, I created the blocking for demo purposes using a table lock, essentially. But you'll see that my blocking condition is not unlike what you would see in your own environment; it's similar to what you would see, even though I'm simulating a blocking condition here. Real-life blocking scenarios can certainly escalate, or blow up or snowball: it might start off as one process blocking another process, and that can turn into more processes being blocked, or multiple lead blockers with many blocked processes. What we're seeing here is just my simple blocking chain that I created. You can see it's been persistent for quite some time; I actually created this before we started the presentation today, and that's the blocking chain that I see right now. Like I said, this started earlier today. I might want to go back and see: how did this blocking condition start earlier today? Just using native tools, those types of issues can be more difficult to forensically research. What's so simple here is that I can bring up my history browser, just by clicking the little History Browser link in the upper right-hand corner. It brings up a calendar showing today's date along with the entire month. I can go back to yesterday, or a week ago, or a month ago if I want to. For today's date, I have the snapshots of performance gathered throughout the day. I can go back to earlier today, and I'm just going to go to the top of the hour when we started this presentation.
As a matter of fact, you can see right here where it started to escalate, when I started to build up the execution of the blocking condition before we got on this presentation today. That's not far enough back, though; obviously, the blocking chain is still there. I'm going to take steps back, using the little arrow keys at the top. I could also just pick the times down here in the lower right, but I'll just take steps back, clicking the little back arrow, and we'll go all the way back to the beginning. Notice that's where it first spiked up. This was at 9:49 AM: session 111, with session 111 blocking session 129. We can scroll to the right to see the status as if we were sitting there at the console at that point in time: the logged-in time, the last activity time, the database name. We could pretend that's that payroll database, for example; I know I put it on the good old-fashioned Northwind database, but we'll just pretend it's payroll. Just scrolling on to the right: the CPU, physical I/O, memory, the application, the user, the host system. Or, better yet, just to see a little block detail report, you can click the little block report in the middle of the screen. It pops up in this window, and you see the blocking process, the blocked process, the user, database, and application, and the last command executed on both the blocking and blocked process. We can also export this detail out to XML to share, again, with other audiences if we need to. All right, we share that with an application developer, perhaps, and they might come back to us and say: this is great information. You've given me the prescriptive analytics and all those details in that report, with the different findings and recommendations. You've identified this blocking condition around the payroll application, of course, but what about other sessions that may have caused some side-effect performance issues that led to this blocking condition, completely unrelated to that condition?
Can you show me, in essence, all the sessions that were active, sleeping, or suspended at that point in time? The answer is yes: we just right-click that lead blocker and go to View Session Details. You could also just click Details at the top, but get used to my way of doing things; I right-click that lead blocker and select View Session Details. It takes me right to where I need to be. There's the session that is blocking. As a matter of fact, if I move off of that session, notice it highlights it in red to help draw my attention. I'm not having to deal with all those sleeping and suspended sessions, trying to weed through them to find the problems; it's highlighting them for me. It took me right to where I need to be, and of course, down at the bottom, we also see the details of user, database, application, the last command, all of that. As I go down the list, I see the status of all my sessions. There's one session that was active, and that was actually our collection process collecting this data, but there's not really much else going on. We've pretty much put the nail in the coffin here: there wasn't something else happening that was related to the performance issue we saw with the blocking condition. I'm going to share this information with the payroll application developer by simply right-clicking and exporting to Excel. We give them that information, bundled up with the prescriptive analytics and the evidence of the blocking and all of that. We still need to eliminate this blocking condition, though. We're going to go back to real-time mode, and we can certainly eliminate it here if we wanted to, or we could go back to the blocking view; see, there's yet another sub-level menu up here at the top. If I go to the blocking view, there's that blocking chain, persistent, with that table lock. I'm just going to take that lead blocker and right-click.
If you're an administrator in the tool, you have these options; we don't take security lightly. But as an administrator, I can actually right-click and kill that session if I want to, and it will eliminate that blocking condition.
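The blocking-chain view we just walked through is essentially a tree rooted at each lead blocker, built from per-session "blocked by" information. Here's a minimal Python sketch of that reconstruction; the session IDs mirror the demo (111 blocking 129), and the extra sessions are made up to show the snowball case.

```python
# Sketch: reconstructing blocking chains from per-session blocking
# info, the way a sessions view derives lead blocker -> blocked ->
# blocked. A lead blocker blocks others but is not itself blocked.

def blocking_chains(blocked_by):
    """blocked_by: spid -> spid of the session blocking it (or None)."""
    blockers = {b for b in blocked_by.values() if b is not None}
    leads = [s for s in blockers if blocked_by.get(s) is None]

    def walk(spid):
        victims = [v for v, b in blocked_by.items() if b == spid]
        return {"spid": spid, "blocks": [walk(v) for v in victims]}

    return [walk(lead) for lead in sorted(leads)]

# 111 blocks 129, and 129 in turn blocks 140 -- a snowballing chain.
# Session 55 is idle and belongs to no chain.
blocked_by = {111: None, 129: 111, 140: 129, 55: None}
print(blocking_chains(blocked_by))
```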

Viewing Historical Data

You can see it has now dropped off to zero blocked sessions, so everything's back to normal. The blocking chain is gone. But let's just say I step back into this later in the day; I wasn't involved in the original diagnosis, and I want to go back and research the problem from the past. I always have that history browser there on the right-hand side, where I can take those steps back in time to bring up that problem from the past and drill into it as if I were sitting at the console at that point in time. When you're in history browser mode, the entire console changes to history browser mode. You can switch between the various tabular views and see the collected data as if you were sitting there at the console at that point in time. When you're done looking at historical data, you just click this little Real-Time Mode button, and everything goes back to normal.

Analyzing SQL Query Performance

Okay. We're going to just move across these tabs at the top. The next tab is the query monitoring. We have about seven or eight different ways to analyze query performance in this tool. Like I said, the product's been around for quite some time, and as you can imagine, we're always going in and adding new insight into performance, new ways to analyze performance. With queries, that's really where the rubber meets the road, so it's very important to have different ways to slice and dice that query performance. We have these little menu options here: signature mode, statement mode, query history, and query waits, and even within query waits, query waits over time and query waits by duration, which you'll see in just a second.

Statement mode

I'm going to start off with statement mode. I know we're on signature mode here, but we'll start with statement mode first because of what statement mode displays. You have a filter option here, and you have your queries that were collected in the given date range and time period that you select in your filter down at the bottom. It shows the queries that were captured. What we'll see is the performance of individual queries: how they performed, who executed them, when they executed them, and from which client or workstation. I run scripts directly on the server; that's why you see the client listed as the server. If you see any yellow on a cell, that yellow indicates the value falls into the top 20% highest impact of all the values in that column; yellow is something you should probably pay attention to. If you see red, that means the value lit up in red is in the top 5% highest impact; that is especially concerning, more than likely. Granted, it just depends on what column we're looking at: you have CPU, reads, writes. It doesn't necessarily mean there's a problem; it's just showing you that the statement stands out in that particular area of performance among all the other queries. If you want to look at the query, you can right-click and show the query text details, and it'll show the entire command, including the parameters that were executed. The same statements that we're viewing in this view, we can go to signature mode, and what it will do is consolidate those statements. If you have a query that executes 200 times in the given date range and time period, what we will see here is a single reference to that statement (and I say statement, but it could be statements, stored procedures, and SQL batches, as you can see here), with the number of occurrences, and now we'll view the performance more from an average perspective.
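The yellow/red cell rule is just a per-column percentile cut. Here's a Python sketch of that highlighting logic, using the 20% and 5% thresholds from the narration; the sample durations are made up.

```python
# Sketch: the yellow/red cell highlighting rule -- flag a value yellow
# when it falls in the top 20% highest-impact values for its column,
# red when it is in the top 5%. Sample data is illustrative.

import math

def highlight(values):
    """Return a label ('red', 'yellow', or '') per value, by rank."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    n = len(values)
    red_n = max(1, math.ceil(0.05 * n))     # top 5% (at least one)
    yellow_n = max(1, math.ceil(0.20 * n))  # top 20% (includes the reds)
    labels = [""] * n
    for rank, i in enumerate(order):
        if rank < red_n:
            labels[i] = "red"
        elif rank < yellow_n:
            labels[i] = "yellow"
    return labels

cpu_ms = [5, 8, 12, 7, 950, 40, 6, 9, 3, 11, 4, 10, 2, 13, 6, 8, 5, 7, 60, 14]
print([(v, l) for v, l in zip(cpu_ms, highlight(cpu_ms)) if l])
```

As the transcript notes, a highlighted cell is not automatically a problem; it just means that statement stands out in that column relative to its peers.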

Signature mode

Okay, you get the performance from that perspective. Signature mode and statement mode are very similar, but signature mode consolidates queries while statement mode shows individually executed queries; both look similar in the way they appear on this view. In this view, you’ll also be able to go in and look at performance broken down by database, application, user, workstation, and SQL command, and you have all these areas of performance. If you need to chart out some of the performance, such as your top five statements based on duration, or here, let’s go with application, for example: average duration, average reads, average writes. You can change it up, in essence, to see different comparisons of performance. In this view, you can also take any statement, right-click, and show query history, and when you do that, it just jumps to the next tab over.
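Conceptually, the consolidation signature mode performs works by normalizing statements so that executions differing only in literal values group together under one signature. Here's a rough, hypothetical sketch of that idea (the product's real normalization is surely more thorough):

```python
import re

def query_signature(sql):
    """Collapse a statement to a signature by replacing literal values
    with '?' and normalizing whitespace and case, so repeated executions
    that differ only in parameters count as one signature."""
    sig = re.sub(r"'[^']*'", "?", sql)           # string literals
    sig = re.sub(r"\b\d+(\.\d+)?\b", "?", sig)   # numeric literals
    sig = re.sub(r"\s+", " ", sig).strip().upper()
    return sig
```

Feeding captured statements through this and counting with `collections.Counter` would give the occurrence counts per signature that the view displays.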

Historical Analysis

We went from signature mode to query history, but because I right-clicked on that one statement, it focuses purely on that statement to show me its history over the given date range and time period, which you can change to view today and yesterday, or the last week, or the last two weeks if you want. You’re now able to see the performance of that statement historically by duration, CPU, reads, writes, and waits. And why is this important? Well, it will help you identify any repetitive trends and performance degradation, any abnormalities where the statement jumps up in duration every day at 10 o’clock, where you might want to focus on that time period and see who’s executing it then and what’s so different about that point in time. It’s also going to be a benefit to you if you make any changes to configurations or queries or, to that point, optimize the statement: this will help you understand where you came from with performance and also where you go in the future after those changes are made.
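Spotting a statement that reliably jumps in duration at the same time every day is essentially a group-by-hour comparison against the overall baseline. A minimal sketch of that kind of check, with an illustrative 2x threshold of my own choosing:

```python
from collections import defaultdict

def spike_hours(samples, factor=2.0):
    """Given (timestamp, duration) pairs for one statement, return the
    hours of day whose average duration exceeds `factor` times the
    overall average -- the recurring 10 o'clock spike described above.
    The threshold is illustrative, not the product's logic."""
    by_hour = defaultdict(list)
    for ts, dur in samples:
        by_hour[ts.hour].append(dur)
    overall = sum(d for _, d in samples) / len(samples)
    return sorted(h for h, ds in by_hour.items()
                  if sum(ds) / len(ds) > factor * overall)
```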

Query Waits

Of course, you can export these details out to Excel or save out an image if you want. Then we have the query waits, and I don’t have a lot going on here, but I have query waits over time and query waits by duration. Query waits over time is just showing what the recent activity has been with the different wait categories on the wait category tab, or by wait activity around certain statements, applications, databases, workstations, sessions, and users. You can go to query waits by duration when you’re more in the heat of the battle and want to see what’s going on right here, right now. Notice it changes the view to a sideways bar chart, and I have my application names here. If an application stands out, well, we can still pretend that we’re looking for that payroll application, a hypothetical application, but you click on the application from the list that draws your attention and it drills down into the wait categories for just that one application and nothing else. We’re not muddying the waters with anything else going on. If you’re really focused on isolating a particular application or a particular user database, you can drill in however you want. I just chose the application tab to drill into, but see, now I have my wait types from just that application. If something stands out, I click on that category and it takes me down to my statements for that wait type and that application. My scripts that ran showed up as a SQL command, obviously, as you see up here, but I also ran a DBCC CHECKDB on a database, so that’s what we’re seeing here, and that’s why we had some wait time around that. You can drill into that and see the associated database for this DBCC command; it gives me that detail there. Most of the time you’re going to see the actual database that’s running a statement directly against the database. I can see my workstation, my sessions, and users as well.
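Under the hood, any waits-over-time view has to work from deltas, because SQL Server exposes wait statistics (for example, in `sys.dm_os_wait_stats`) as counters that only accumulate since startup. A small sketch of that snapshot-diff idea:

```python
def wait_delta(before, after):
    """Compute per-wait-type growth between two snapshots of cumulative
    wait stats (dicts of wait_type -> wait_time_ms). Returns wait types
    with nonzero growth in the interval, largest first."""
    delta = {w: after.get(w, 0) - before.get(w, 0) for w in after}
    return sorted(((w, d) for w, d in delta.items() if d > 0),
                  key=lambda p: p[1], reverse=True)
```

A monitoring poll that runs this every interval and charts the results gives exactly the "waits over time" picture described above.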

SQL Query Tuner & SQL Workload Analysis

I’m going to deviate from this console for just a moment. I want to show you a tool called SQL Query Tuner. SQL Query Tuner gets bundled with another tool called SQL Workload Analysis and with the SQL Diagnostic Manager core product in what we call SQL Diagnostic Manager Pro. You get these additional benefits of SQL Workload Analysis, which is more of a statistical analysis tool around your queries and query performance. Since we have a short time today, I picked one of the tools, and I want to focus on SQL Query Tuner because I think this is a good tool to demonstrate for both DBAs and developers. I say that because, for one thing, this tool is very easy to use and can be installed by everyone on the team on their own workstation, so they can each have their own little SQL Query Tuner console. If you’re a developer and you want to script out objects, it identifies everything on the instance; you can expand and see the objects, and whatever you want to extract, you just right-click and select extract, and it will script that out for you. It’s easy to basically pull that object, and if you have other instances, you can change it up here to be a different instance and click execute, and literally create that database over on a completely separate instance, just like that, in a matter of seconds. You have some of those features for developers, but the other benefit, as the name SQL Query Tuner implies, is the ability to profile performance for a period of time and then tune any problematic queries. To profile, I already ran it earlier, but I’ll just show you: we right-click, select profile, and select the data source connection for that instance, and it will start watching what’s going on in this instance.
I don’t have a lot going on here, but you’ll see some of the activity, including even the collection activity going on with Diagnostic Manager; anything that’s going on, it’s going to pick that up. I won’t let that run for now. Like I said, I already ran this profile earlier. You profile for a period of time, and you can keep it running for an hour if you wanted to, or five minutes, or twenty minutes, but then you come back here and you can isolate different periods of time. Maybe I just want to focus on the spike: whatever I highlight, it zooms in and everything changes on the screen to isolate just that little time window. You can also move these windows; if you wanted to move this little time slice over to here, then everything changes to reflect that as well. All right, just going back over here, you can take a statement, right-click, and look at the execution plan, and you’ll see the cost associated with every step of the operation: the I/O cost, CPU cost, row size, rows, and executions. There are other statistics here as well. As you can see, this is the summary overview screen for the event, wait types, and session activity, but there are also tabs for overall SQL activity for this entire time slice, the wait events to show the overall database activity, the workload, sessions, and stored procedures. Even here, if we wanted to, we could tune those stored procedures as well, but you can also take individual statements and tune them. If you do, what will happen is it brings up a new tab like this, and you can have it perform a detailed analysis and also test. What it’ll do is go through and look at different ways it could potentially rewrite that statement to improve performance. You can see the different query hints and everything in the preferences expanded here.
It’s got all kinds of different case generation options here that you can pick and choose from. Since I’m here, I’ll also show you that you have these thresholds for improvement: performance more than 10% better than the original execution will be green, and if it degrades by greater than 10%, it’ll be red. I go in, perform a detailed analysis, and have it actually test. Notice it highlights in green and red what the performance was, and blue is just within the plus or minus 10%. Of course, I’m looking for green to be where we need to focus our attention. Like this hash join query: it looks like it improves significantly on the original query, which is the first one. It stepped through all these different test cases, 149 of them, running each one and showing what the performance looked like with each rewrite of that statement. Also notice at the top here it has indexes; it shows different findings around indexes. To get more detail around that, we can click the little analysis tab here at the bottom. Let me expand it so you can see more detail. It’ll show you, for this particular statement that I tuned, which indexes were useful to the optimizer, which indexes could potentially be used, and any indexes that were useless to this particular statement.
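The green/red/blue coloring just described amounts to a simple percent-change classification against the original execution. A sketch, using the 10% figure from the demo:

```python
def classify_case(original_ms, rewrite_ms, threshold=0.10):
    """Color a tuning test case: 'green' if the rewrite runs more than
    10% faster than the original, 'red' if more than 10% slower,
    'blue' if within plus or minus 10%."""
    change = (rewrite_ms - original_ms) / original_ms
    if change < -threshold:
        return "green"
    if change > threshold:
        return "red"
    return "blue"
```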

Get Expert Recommendations

And it also provides recommendations: indexes suggested to be created as well. If you want them created, you can select any of them, right-click, and just create the indexes, if that’s what you want to do. Going back to the overview screen, though, for the test cases: it went through 149 different test cases to rewrite that statement, and I can see the performance. This hash join query’s performance looked significantly better than the original statement. What I’ll do now is compare it to the parent, and you’ll see the original statement on the left, and highlighted in blue is the rewrite here on the right-hand side. If I want to go ahead and test this, I do a select all and copy. Now here we get into some of those features that developers like, such as load testing around a statement, where you can simulate a certain number of parallel sessions running for a certain period of time, or a certain number of executions with a fixed delay between each iteration or a random time interval between each iteration. You can also just run it one time. At the top, notice there’s a little SQL button; click on that, select your database from the list, paste the statement in, hit execute, and we get our results back. Now, I want to know the performance that we saw from that. What I can do is go back over to the profiling activity that we kicked off earlier. Of course, there are a lot of different things going on in this instance, and I want to focus only on what I just ran. If I hit the dropdown, I can filter by application, command, or database, however I want to filter. I’m going to select the database filter and choose that database, then refresh. And there’s that one execution.
I might go back now, grab another one of those statements from the list, execute it, and do a comparison across the multiple statements that were rewrites of the original to see which ones I like best. When you’re done, click stop and it’ll quit profiling the activity on that SQL instance. Anyway, there’s a lot there as far as query monitoring; like I said, about seven to eight different ways of query analysis. Moving right along, I know we’re running short on time, and I want to leave room for some wrap-up and Q&A at the end, but there’s the resources view, which focuses more on the hardware aspects of your environment.

Hardware

CPU, memory, disk activity: this is the summary of the high-level performance. If you click one of these charts, it’ll drill down. Like I clicked on the memory chart; it takes me into a lot of the memory charts that DBAs like to look at, like page life expectancy, buffer and procedure cache, and the working set memory. These charts can also be expanded just by clicking the little expansion button at the top right corner. Same for disk performance: there are all kinds of disk metrics in these charts, like disk busy measured up to a hundred percent, disk queue length, SQL Server physical I/O, and disk transfers and disk reads per second, per disk. There’s disk size information here as well. By the way, all of this data also feeds into reports; we’ll get to that in just a second. But you can also go in and just analyze the drives on the fly here, and notice it shows the size of the drive, the amount of space consumed on the drive overall, and the amount of space consumed by data files or transaction log files. All of that is exportable. Moving right along, we have the databases tab here with a summary of the overall database status on instances: the amount of space consumed and the current status as far as whether it’s accessible or not. I personally like to use this view to do database comparisons. You’ll notice that if I select some of these databases, it adds them to the charts, which I can then expand. Now I’m able to do a side-by-side comparison of my databases in a visual view, right from within this console. I like it too because I’m always going in and playing around, testing in my environment; I’ll inflate databases and things like that. If I start running low on space, I want a nice visual view of where my space is consumed, and as you can see, this will do that. You can also hit the dropdown and look at your transaction log sizes side by side.
If you want, there’s availability group analysis. You don’t really have to pay attention to that part of the console, because we’ll alert you to it. This instance is not in an availability group in particular; in fact, though, I will show you, hopefully, if I can. I may not have connectivity to this one since it’s not my instance. We’ll see. Looks like I might, though. There we go. So you get a view like this. You don’t have to really pay attention to this, though, because we’ll alert you to the synchronization health, failovers, and performance in general. You’ll get those alerts without having to constantly watch this view, but it is nice that you have that dedicated view.

TempDB

There’s a tempdb summary view where you can go in and look at the size of your data files for tempdb, tempdb contention, and version store generation and cleanup rate. Down at the bottom, you see your sessions directly interacting with tempdb. It will alert you to performance around this as well; you don’t have to babysit this view, but if a session is consuming 20 gigabytes of tempdb space, you might want to know about that. We can get that alert and then go in here and see what’s going on, even what the statement is that’s executing and from which application. If we need to, we can kill that session if it is a problematic session on tempdb. Before we wrap up, I want to show a couple of other things here. I know we’re running short on time, but I definitely want to show you these views as well.
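The 20-gigabyte example above is just a per-session threshold check. A sketch, where `session_usage` is hypothetical input that in practice would be derived from a DMV such as `sys.dm_db_session_space_usage`:

```python
def tempdb_offenders(session_usage, threshold_gb=20):
    """Return the session IDs consuming at least `threshold_gb` of
    tempdb space. `session_usage` maps session_id -> bytes consumed."""
    limit = threshold_gb * 1024**3
    return [sid for sid, used in sorted(session_usage.items())
            if used >= limit]
```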

Database Backups and Restores

The backups and restores. We don’t store this history, by the way, in our repository. As you can imagine, that would add some substantial bloat when you multiply msdb backup and restore history times all your instances. However, we do monitor and alert on days since last backup with our polling, so we’ll alert you to that. If you wanted to, you could go into this backup and restore view, and it will directly query msdb to present the history of your backups and restores on particular databases. And this can also be exported.
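The days-since-last-backup check reduces to comparing the newest backup timestamp for a database (in practice, from `msdb.dbo.backupset`) against an allowed age. A minimal sketch:

```python
def backup_overdue(last_backup, now, max_days=1):
    """Return True when a database's most recent backup is older than
    `max_days`, or when it has never been backed up at all.
    `last_backup` and `now` are datetime objects (or None)."""
    if last_backup is None:          # no backup history at all
        return True
    return (now - last_backup).days > max_days
```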

Index Fragmentation

Also, this is the tables and indexes view. You might remember from the beginning of our presentation today I talked about fragmentation as something we monitor, and we certainly do, but you can also go in here, select a database, and look at your current fragmentation. We call the column percent fragmentation at quiet time, because we’re analyzing this during whatever your off-peak time is. You can choose what time of day that is, but by default it would be around 3:30 in the morning, and it runs once a day. You come in in the morning and you have your alerts around any fragmentation. This top table here is my alerts table; I actually picked the SQL Diagnostic Manager repository. It has some fragmentation, and it looks like I need to run a defrag. I select that table, and down at the bottom I see the breakdown of the size, the dependencies, the indexes, and overall index statistics. You can hit the dropdown here to go into individual index statistic details. Right here I can right-click and rebuild the indexes on the fly. You might even decide that you want to set up some alert actions around that fragmentation alert to trigger a response to that type of alert. We also have a separate tool called SQL Defrag Manager that you might be interested in reviewing; it has more of the set-it-and-forget-it, intelligent policies that target fragmentation, so you don’t have to really worry about going in here and doing this the way I did.
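When deciding whether to rebuild or reorganize an index by hand the way I just did, a commonly cited rule of thumb (my addition here, not necessarily the product's thresholds) is to reorganize at moderate fragmentation and rebuild at heavy fragmentation:

```python
def defrag_action(fragmentation_pct, reorg_at=5.0, rebuild_at=30.0):
    """Pick a defragmentation action from a fragmentation percentage,
    as reported by sys.dm_db_index_physical_stats: reorganize between
    ~5% and ~30%, rebuild above ~30%, leave anything lower alone."""
    if fragmentation_pct >= rebuild_at:
        return "rebuild"
    if fragmentation_pct >= reorg_at:
        return "reorganize"
    return "none"
```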

Services View

So anyway, that’s the databases view. I’ll do one more tab here real briefly, and then the reports, and then we’ll go ahead and wrap it up. The services view: this is where you get the replication monitoring, and we have mirroring status monitoring as well, but also the SQL agent jobs. That’s something I constantly hear: everyone wants to know if there are job failures or if a job is taking excessively longer than normal to run, and we monitor that through this tool. Again, you don’t have to babysit this view to get that type of alerting; we’ll tell you about it, but you can go in and pull up the SQL agent job listing. This data is not stored in our repository, again because it would add bloat, and we’re monitoring around the jobs anyway through our remote queries. If you go into the SQL agent job section, it makes its query to enumerate the details on this view. You can pick a job from the list and it shows the history down below, including any of the messages for any steps, which can then be exported. All right, we’ll wrap it up, and that is the reporting section.

Custom Counters

I will say with this tool, if there’s anything that you want to add above and beyond what we provide, we do have the custom counter wizard that will allow you to basically add in anything: any PerfMon or WMI counters, SQL Server counters, SQL scripts, Hyper-V or VMware metrics, or Azure system counters to be monitored above and beyond what we provide. It’s very moldable around your own environmental needs. I just wanted to throw that out there before we go into the last section, which is the reports. And there are all kinds of reports; like I said, the product’s been around for over two decades, and you can imagine that over those two decades we keep adding more and more reports to polish up data for different audiences.

SQL Performance Monitoring

The monitor category is more of the real-time reporting, the analyze category digs into your historical data, and the plan category looks into the future. Under monitor, first thing in the morning you might click this enterprise summary status and just check out what the green, yellow, red status is in a nice report. These reports can be published to Reporting Services, and you can set up subscriptions to email you the report on a regular basis, or you can just run it from the console like I did. We can export the report to Excel or PDF or hit the print button, but you can see here the status of the instances, with the performance overview details at the bottom. If something’s critical and you want to know why, you can click and interact with these reports. If I click on the critical status, it takes me to yet another report, automatically submits the details of the server or the tag, whatever I selected on the previous report, magically embeds that in here, and executes the report. Now I have that instance and a list of all the current active alerts that are causing it to be critical. I can also click on the instance and interact with that report; it takes me to the server summary, showing me how that instance has been performing so far today with response time, CPU activity, OS memory usage, and disk activity. And like I said, there’s the analyze section; let’s go back to the main menu. The analyze section is more of the historical data, like worst-performing servers over time, worst-performing queries, the alert history, worst-performing databases, and disk space usage. I’ll go with the top databases report here: choose a report, choose the instance. Some of these are focused at the enterprise level, across many instances; this one is focused on just the databases on a per-instance basis, but each report has different perspectives, so to speak. We’ll just look at the top databases based on the last seven days, maybe the top 10.
Now we get a nice little category-style report showing top databases by overall data size and data growth, with other statistics next to each of these areas of focus: top databases by reads, writes, transactions, waits. And you can click on anything in the report again and interact with it, drilling in to see yet another report that focuses purely on whatever database you selected. Now I’ve drilled into the database statistics report. I came in from the transactions-per-second section of that previous report, so it automatically set that as my chart type. Now I have my chart down here showing me the last seven days of transactional activity, day to day in hours, or I can change it up to look at days if I want more of a flat-line view, to see if we’re having any fluctuation day to day.

Wrap-up

So, anyway, that is diagnostic manager. Now I know we’re up to our limit here on the time we’ve allocated today. I’m going to switch back over here and just look at the question panel.

Q&A

Hopefully this has been useful to you all. Let me see what we have. All right, I have a couple of questions here. Let’s see. Okay, actually, it’s one question here: what impact does query monitoring have against the monitored server? For query monitoring, we have different options available for collecting query data. You can use extended events, or you can use old-school SQL trace; like I said, the product’s been around for two decades, and that’s the original way we captured data. We also have Query Store as an option with newer versions of SQL Server. The impact, we typically say, is two to three percent, generally speaking. As far as what the query data collection is doing, the area I’d call out is that when you collect query data, you can choose whether to gather the actual execution plan or the estimated execution plan. If you gather the actual execution plan, I’ve noticed more impact, and I think we can all agree that’s something that would potentially happen when you’re pulling down the actual execution plan; the estimated execution plan is more of a statistical gathering and is less invasive to performance. I would recommend using the estimated execution plan for day-to-day capture to minimize performance impact, but typically, with the overall data collection of this tool, we’re looking at a two to three percent impact on average. Okay, the other question: there were two questions, but the other one is from a person who has already installed the product and basically has a question about one of their views. I would suggest, if you run into any issues or have any questions, please get with your account manager; they can either get a technical resource on to assist you or open a support ticket if there’s a problem.
But basically, in the demonstration today I wanted to demonstrate these features; we can also take a more one-on-one, more formalized approach with any of you and discuss anything that you see in your environment with this tool as well. That really is the only couple of questions I had here today.

Try SQL Diagnostic Manager Free for 14 Days

Remember, you can go to www.idera.com to download the product; in the products dropdown list, there’s the product called SQL Diagnostic Manager for SQL Server. We also have SQL Diagnostic Manager for MySQL, so to clearly differentiate between the two, I want to make sure you understand it’s the SQL Server one we looked at today. You download it, and it takes about five to ten minutes to install. Once it’s installed, you hit the ground running: it has its own trial license key embedded, and you’re able to take advantage of the features you saw today, fully functional, for 14 days and up to 15 instances. If you need more time, again, you can talk to your account manager and let them know you need more time with it. I’m going to go ahead and wrap up our webcast. Keep in mind that we do webcasts week to week; we always do these little demonstrations as well, and I have a team of engineers like myself who take turns doing the presentation, so you’ll get different flavors of it if you come back and watch on another day. I encourage you to watch the Idera event calendar for what’s coming up and attend those webcasts as well. Other than that, that’ll wrap up our webcast today. I appreciate you all attending, and I hope you all have a great day.