Geek Sync Webcast: Dealing with Network Latency Issues and the Impact on SQL Server Performance

About this webcast:

Greg: Welcome to the MSSQLTips webcast "Dealing With Network Latency Issues and the Impact on SQL Server Performance," sponsored by Idera. I'm Greg Robidoux, co-founder of MSSQLTips and today's webcast host. Joining me today is Dr. Dallas Snider, who has contributed many tips on MSSQLTips and presented several webcasts about SQL Server. Today he will talk about network latency issues and their impact on SQL Server. Before we get started, if you have questions you can submit them at any time during the webcast by entering your question in the question area in your presentation control panel. We'll try to answer as many questions as possible in the time allotted. If you have any technical issues with sound or screen quality during the webcast, also use the question area to let us know. If you're not already a member of the community, we invite you to join. You can go to the MSSQLTips site and sign up for our newsletter, read any of our tips, download white papers or watch archived webcasts, as well as post your SQL Server questions to be answered by our community members. This webcast will be recorded and archived for future playback and you'll receive a follow-up email with the links. Can we go to the next slide?

After the webcast, please take some time to visit Idera's website to learn about all their SQL Server tool offerings, as well as download some of their free SQL Server tools. Idera's latest SQL Server offering is SQL Traffic Accelerator. This tool allows you to reduce bandwidth requirements when transferring SQL Server data over the network and dynamically compress the data, and it also offers point-to-point data encryption. Today Dallas will talk about some of the best practices and ways to reduce data transfer over the network, and with the addition of SQL Traffic Accelerator you can further increase performance and reduce latency and bandwidth requirements. It's a simple install and doesn't require any recoding. So again, after the webcast, please visit their site to learn more.

So at this time I'm gonna turn things over to Dallas and he can begin the presentation.

Dallas: Hello everyone, I am Dallas Snider. I am an assistant professor in Computer Science at the University of West Florida. I was previously employed by Northrop Grumman Information Systems, Acxiom Corporation and Euronet. I have been a contributor to MSSQLTips since January 2013.

So, part of the talk today will be on how to approach network latency issues, query options in SQL Server Reporting Services, performance considerations for SQL Server Analysis Services, network issues in SQL Server Integration Services, and troubleshooting techniques for network latency issues.

So let's start off by defining what latency is. Latency is the amount of time for data to move from point A to point B. Some measure latency in terms of the time it takes to make a round trip from point A to point B and then back to point A. Regardless of how latency is measured, there will always be some minuscule amount of latency. High latency creates bottlenecks in networks, which decreases the effective bandwidth, which then leads to increased processing times. To the end user of a database application, high latency can make an application appear to be responding slowly or just completely unresponsive. Timeout messages might also be encountered. There is not much good to say about high latency in your network. So let's look at how to approach latency issues.

Network latency issues which affect database systems can be difficult to prove for several reasons. First of all, there are so many moving parts. We have servers, routers, firewalls, switches, storage area networks (also known as SANs), client devices, anti-virus software, and we also have virtual machines. And the network is heterogeneous: we have wired components, we have wireless components, we have different vendors and we have different bandwidths. Also, depending on your role in the organization, you might not have access to all of the information about the network. If you are a database developer, you usually do not have rights to database administration tools or tables with performance data. Database administrators might not have access to network diagnostic information, and network administrators typically don't have access to database performance data. Often it takes some degree of negotiating skill to get everyone in on the conversation: your database administrators, your database developers, your network administrators, your system administrators. You want everyone to get involved to look at the problem. Now, as a database application and business intelligence developer, it has been my experience that network administrators with database experience will question your database queries, your configurations and your processes. While it is fair to question your methods and processes, it can be frustrating when you have done all of your homework and deduced that the bottleneck must be occurring in the network. So to be sure that you have all of your bases covered, you must make sure you are sending a minimal amount of data across the network. You also must eliminate contention with other processes on your database server. Also, your processes could be affecting others', and vice-versa.
So before we get to the more technical portion of the webinar, let’s take a few minutes to look at some obvious and sometimes overlooked issues.

First of all, we want to reduce the amount of data transmitted over the network. Are you using SELECT *? Are you using a WHERE clause? Are you using the WHERE clause effectively? Are you minimizing the use of large objects? Now, this does sound simple. Are you using SELECT * instead of specifying specific columns? Are you using the WHERE clause? But think about this: have you ever been in a hurry when you're writing a script or a process, and you forget to go back and fix the query, and then table changes occur? Think about this: you write a query, it works well, the table is small, but over time the table has grown and now you're sending more data across the network. Maybe we need to go back and look at these existing queries and try to reduce the amount of data sent across the network.
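As a quick sketch of the point above (the table and column names here are hypothetical, just for illustration):

```sql
-- Sends every column of every row across the network
SELECT * FROM dbo.Orders;

-- Sends only the columns and rows the application actually needs
SELECT OrderID, CustomerID, OrderTotal
FROM dbo.Orders
WHERE OrderDate >= '2014-01-01';
```

As the table grows, the difference in bytes on the wire between these two queries grows with it.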

Next, let's look at SQL Server Reporting Services and some of the query options in SQL Server Reporting Services. I'm going to switch over to SQL Server Reporting Services just real quickly.

OK, so this is SQL Server Reporting Services. This is a simple report that was pulled from the AdventureWorks 2012 data warehouse database, and in the report shown here we want to only display sales data on the bikes, OK? So here's a product category and we just want to display sales data on the bikes product category. Now SQL Server Reporting Services will allow you to write a query to create a dataset and then filter the dataset on a value after the query has executed. So in our sales data here, in our dataset, we can filter, and here we have a filter set up to only display those products, or those bikes, where the color is black. The problem with this is that the query has already executed and pulled all the data across the network, and now you're asking it to filter.

Another problem that we can run into with SQL Server Reporting Services is with the tablix object itself. If we look at the tablix properties, it too will allow us to filter on returned data, so whether we are filtering on the tablix or filtering on the dataset, these scenarios cause potential problems because there's a potential for a large amount of data to be transmitted over the network before filtering. A better way to do this is to have a parameter in the query to reduce the amount of data sent across the network. So in the actual query, if we go down here to the bottom, I hope you can see this, we have actually set up a parameter, and by using the parameter we have reduced the amount of data coming across the network.
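A rough sketch of the difference, using table and column names from the AdventureWorksDW2012 sample database; treat this as illustrative rather than the exact report query:

```sql
-- Dataset-filter approach: rows for every color cross the network,
-- then SSRS filters them on the report server
SELECT p.EnglishProductName, p.Color, f.SalesAmount
FROM dbo.FactInternetSales AS f
JOIN dbo.DimProduct AS p ON p.ProductKey = f.ProductKey;

-- Parameterized approach: only the requested color crosses the network
SELECT p.EnglishProductName, p.Color, f.SalesAmount
FROM dbo.FactInternetSales AS f
JOIN dbo.DimProduct AS p ON p.ProductKey = f.ProductKey
WHERE p.Color = @Color;
```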

Now I’m going back to PowerPoint, and I will … just one moment.


Alright, just one moment, I ….


OK, sorry about that. Next we'll move on to talk about SQL Server Analysis Services. SQL Server Analysis Services, or SSAS, can consume a considerable amount of network resources during dimension and cube processing. To ensure that you are not overloading the network, and your servers too, make sure you have a minimal amount of data in your dimensions. Do you really need first name or street address in your cube for analysis? Keep those attributes in your dimensions that truly can be used for analytics. If you need more details, consider using drill-through reporting to access the details that are stored in the relational database.

Next, queries to build dimensions and cubes can generate an enormous amount of network traffic. Look at using SSIS to spread out the processing by dimension and then processing the cubes. I have utilized this and it has been very effective in reducing the amount of traffic on the network. Also, just like T-SQL queries, MDX queries in Analysis Services have FROM and WHERE clauses. We want to make sure we use those effectively to reduce the amount of data returned.
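A minimal MDX sketch of the idea, assuming the standard Adventure Works sample cube; restricting both axes and slicing in the WHERE clause keeps the returned cellset small:

```mdx
SELECT
  { [Measures].[Internet Sales Amount] } ON COLUMNS,
  NON EMPTY { [Product].[Category].[Category].Members } ON ROWS
FROM [Adventure Works]
WHERE ( [Date].[Calendar Year].&[2013] )
```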

Next, I wanna talk about issues we find in SQL Server Integration Services. While SSIS allows us to create packages, and processes within those packages that run in parallel, we need to ensure that we are not trying to push or pull too much data across the network simultaneously. So let me switch over to SSIS. OK, so what we see here is a control flow. We have three data flow tasks. Within each data flow task, we have two reads and two writes, so there can be six reads and six writes occurring simultaneously. This has the potential to cause network bottlenecks, especially if one or more of the tables are growing at a rapid rate. Also, as we see in this next example in our data flow task two, the potential for network latency issues still exists if the data source is a file. Furthermore, copying files from one source to another across the network will consume resources, so try to copy only compressed files. Also, try distributing the workload by making one pass dependent on the completion of the other. And again, ETL tools such as SSIS will let us do a lot of things in parallel, but we really have to be careful that we don't overload our system. Also, we can check job schedules and steps within these jobs to see if any jobs can be spread out in time to avoid any network bottleneck issues.

Next, we will move on to discuss log shipping. There are several options to configure transaction log shipping, but the ones that need to be considered most are the frequency of the log shipping job and whether or not to use compression. Now compressed logs, just like compressed files, will transfer faster across the network, but we must make sure we are not processor constrained, so that the compression process does not hinder other processes running on the server. Now this distribution of the workload also applies to replication, so when we talk about snapshot replication, we want to try to run the snapshot agent only when necessary, and at off-peak times. Again, if the snapshot can be compressed without too much memory and processor overhead, then a compressed snapshot should take less time to transfer across the network. Also, we need to publish only the data required.
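For example, backup compression can be enabled server-wide or per backup, and log shipping backup jobs can then take advantage of it. A sketch, where the database name and share path are placeholders:

```sql
-- Server-wide default for backup compression
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

-- Or compress an individual log backup explicitly
BACKUP LOG AdventureWorks2012
TO DISK = N'\\BackupShare\AdventureWorks2012.trn'
WITH COMPRESSION;
```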

Now, RDP, while it is convenient, can introduce network latency issues. RDP consumes network bandwidth because it is transferring the screen data. We want to avoid using an RDP session for file transfers or other data moving operations. Also, we want to avoid using RDP during peak times, because we want to reduce latency and increase our effective bandwidth. So one of the suggestions is to create a job using SFTP, that's Secure File Transfer Protocol, so there's no need to log in to the server via RDP.
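One way to set this up is an Agent job with a CmdExec step that calls a command-line SFTP client. A sketch, where the job name, batch file and host are hypothetical, and an sftp client is assumed to be installed on the server:

```sql
USE msdb;
EXEC dbo.sp_add_job @job_name = N'Nightly SFTP transfer';
EXEC dbo.sp_add_jobstep
    @job_name  = N'Nightly SFTP transfer',
    @step_name = N'Push extract via sftp',
    @subsystem = N'CmdExec',
    @command   = N'sftp -b C:\jobs\put_extract.txt etluser@remotehost';
```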

Next, let's look at troubleshooting techniques. One of the first things we need to do is utilize our job logs. This can be our simplest troubleshooting technique. We want to look for any spikes in our processes' elapsed times, especially if you have processes where the number of records is somewhat consistent. Also, we want to analyze the Database Mirroring and Replication counters in the Performance Monitor.

Next, we want to examine sysjobs, sysjobhistory and sys.dm_os_wait_stats. These are very important views and tables that we can look at to help us troubleshoot, and I'm gonna go through some examples. So if you bear with me, I've got to change my screens again.


Now, this query right here looks at our job runtimes. I'm gonna go ahead and execute this query. Now I don't have a lot of jobs running on this server in particular, but the idea is that we can look at the jobs running and look at the run duration in minutes or in seconds, or however we wish to display it. We can then accumulate statistics on our job runtimes. Same thing with our step runtime query. Again, we can look at each step, look at the step name and look at our run duration. We can look for any spikes. We can import this query into SQL Server Reporting Services so that we can have a nice graphic. We can look for trends, we can look for spikes, we can look at the patterns and how they are occurring. Again, this can be a good indication of network latency issues. Worse, there could be other processes running on the server that are causing these spikes in the elapsed times.
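A sketch of this kind of job runtime query against the msdb tables; note that run_duration is stored as an HHMMSS-encoded integer, so it has to be decoded:

```sql
-- Job run durations from the msdb job history
SELECT  j.name AS job_name,
        h.run_date,
        h.run_time,
        (h.run_duration / 10000) * 3600          -- hours
      + ((h.run_duration / 100) % 100) * 60      -- minutes
      + (h.run_duration % 100)                   -- seconds
          AS run_duration_seconds
FROM    msdb.dbo.sysjobs        AS j
JOIN    msdb.dbo.sysjobhistory  AS h ON h.job_id = j.job_id
WHERE   h.step_id = 0            -- step 0 is the job outcome row
ORDER BY j.name, h.run_date, h.run_time;
```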

In addition to the job log, SQL Server has a dynamic management view that allows you to query for wait times. This view is named sys.dm_os_wait_stats, and we can see the different columns. So in this view the columns are the wait_type, which is the name of the wait type, and according to the latest documentation there are 403 distinct wait types listed in the SQL Server 2014 documentation. These wait types fall into three categories: resource waits, queue waits and external waits. Network waits are considered to be resource waits.

Next in this view is the waiting_tasks_count, which is the number of waits on this wait type; this counter is incremented at the start of each wait. Next, we have the wait time in milliseconds, wait_time_ms. This column stores the total wait time for this type of wait, and this time is inclusive of the signal wait time, which I will describe in just a moment.

max_wait_time_ms is the maximum wait time on this wait type in milliseconds, and signal_wait_time_ms is the difference between the time that the waiting thread was signaled and when it started running. So like I said, there are 403 distinct wait types. Searching for the word "network" in the documentation on the wait types produced the nine wait types shown in this query here. So when I execute this query we can see the different wait types. Let me clean this up just a little bit so we can see all the columns. As we can see in this first query here, several of these are for high availability and disaster recovery, as specified by the HADR abbreviation. High wait times or counts for a specific wait type can indicate a bottleneck in a particular area. For this webinar and the demo I'll show momentarily, we'll look for bottlenecks by examining the ASYNC_NETWORK_IO wait type, which according to the Microsoft documentation, and I am quoting, "occurs on network writes when the task is blocked behind the network. Verify that the client is processing data from the server."
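A search like the one described can be run directly against the view:

```sql
-- Wait types with "NETWORK" in the name, busiest first
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       max_wait_time_ms,
       signal_wait_time_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type LIKE N'%NETWORK%'
ORDER BY wait_time_ms DESC;
```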

So to demonstrate the capabilities of this dynamic management view, I created a table, as shown here; let me bring up the table. So I created this table called network performance, just a simple table, and I filled it with random data values. I populated it with one million records on one computer on my home wireless network that served as my server. Again, I'm trying to create network traffic, and I did not want to do this on the network at work, so I did this at home. Next, I reset the content of the dynamic management view on the server by executing this command right here. It's important that you reset the wait stats, and we want to clear the content before every test query on the client. So the idea is that I have a million records on a server, and I'm gonna query those million records just to test out how this view works. On another computer on the wireless network that served as my client, I executed a query to select all columns and all rows from the table on the server. So essentially I did a SELECT * from this table, and this would pull one million records across the network. After this query returned its one million rows, I executed this query right here to examine the statistics accumulated by this view, followed by querying the wait statistics again. So I would run the query, then I would clear the stats, then I would rerun the query and clear again, so you can see a picture of what is happening every time we run the query. So now we see we have zero, so we're clearing it out, and sometimes when I run this I would get a divide-by-zero error, but anyway, we'll move on. So several times I repeated this combination of running the select query on the client, followed by querying the server's dynamic management view and then resetting the content of the view.
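The reset-and-measure cycle described above looks roughly like this; the test table name is an assumption standing in for the demo table:

```sql
-- 1. Reset the accumulated wait statistics on the server
DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);

-- 2. Run the test query from the client, e.g.:
--    SELECT * FROM dbo.NetworkPerformance;

-- 3. Examine what accumulated while the test ran
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type = N'ASYNC_NETWORK_IO';
```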
So I have some statistics from this process; let me pull those up. Now this is a real simple spreadsheet I have here that's gonna pop up … OK, so what happened was, conducting this little experiment at home, on average, pulling the million records across my home network, the total wait time was approximately 35.79 seconds, the resource wait time was 35.65 seconds, and the signal wait time was 0.13 seconds, with a wait count of 18.22. Now these statistics come off the server. I also calculated the elapsed time on the client side, and as we can see it's just a little higher, 37 seconds.

Next, I increased the network traffic by watching a replay of a segment of a late-night TV show in the browser of my client and then executing the same queries several times. So essentially I ran this query, pulling a million records, several times and took the average under what I consider normal conditions. Then I ran the same experiment while watching a TV show over the internet, and we can see that the view was able to detect that there were issues, or bottlenecks, in the network. So we see that our wait time increased and our resource wait time also increased; the signal wait time, though, did not increase, and I also found it interesting that the wait counts did not increase. The average wait on the client side also increased.

Now while I was conducting this experiment, I noticed an unexpected spike in the times. The average wait time jumped to 1.25; after walking around the house to try to figure out who else was on the network, I discovered that our exchange student was on a Skype video call with her parents. And so, as we can see, this process of querying and resetting will help us figure out if there truly is a network bottleneck. We can also look for bottlenecks in database mirroring and other conditions, so again, this can be a very powerful tool. Now one thing I did notice in the documentation, let me bring this up real quick … I don't know if you can see this, OK, yes you can see this. A lot of the documentation pointed to what's called the network packet size. Now, you can change the network packet size in hopes of increasing your network performance; however, most documentation discourages this. The default is 4096 bytes. I did try in my experiment to increase the network packet size, but I really did not see any benefit. The network packet size can be changed for your query or through the Server Properties window in SQL Server Management Studio, as you see here, but the problem is, or what concerns me is, that so much of the documentation says don't do this, just leave it alone, OK?
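For reference, the server-wide setting lives in sp_configure; this is shown only to illustrate where the knob is, since as noted the documentation generally discourages changing it:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 4096 bytes is the default network packet size
EXEC sp_configure 'network packet size', 4096;
RECONFIGURE;
```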

Now, the one last thing I want to show you is our Performance Monitor. We have counters in Performance Monitor, so allow me to bring that up.


It’s not willing to come up. K?


OK, so the Performance Monitor is not willing to come up, so … we have Performance Monitor counters for replication and mirroring and other types of events that can be employed to look for bottlenecks. Under replication distribution, we have a group of counters that monitor the distribution agent's performance. Some of these are delivered commands per second, which is the number of commands delivered to subscribers per second; delivered transactions per second, which is the number of transactions delivered to subscribers per second; and also delivery latency. There's an actual counter for distribution delivery latency: the number of milliseconds it takes to deliver transactions from the distributor to subscribers.

Also, mirroring has some counters for use in detecting bottlenecks. There's one counter called the send/receive ack time, that is, the acknowledgment time for mirroring, and it can be used to measure mirroring latency between the principal and the mirror servers. Again, what we're looking for is whether this value is larger than normal; if it is, it means that there's a network bottleneck between the principal and mirror servers.

So in closing, we can see that we have several tools available. We have the sys.dm_os_wait_stats dynamic management view, along with our job logs and performance monitor counters, that can help us determine if slow query response is indeed due to network issues or other issues. One of the things I would like to add is that we have to establish a baseline. It's very important when you're troubleshooting network issues, or network latency, that you establish what the baseline is, what normal is, because if we just throw out a certain latency number or a certain time, it's not going to apply to everyone. You know, maybe you have data centers on opposite coasts or on the other side of the world. Maybe you're using some type of satellite uplink for your data. Maybe you're just sending it across the hall, or to the server sitting right next to it. So the amount of latency will vary depending on your scenario, your hardware and your software. There's not a hard and fast "this is a problem, this is not a problem"; it all depends on your environment. So make sure you establish good baselines for off-peak times, and also establish baselines during peak times, and make sure we monitor them. One other last thing I want to say is make sure that you have a mechanism for analyzing the number of records you're processing, especially if you're dealing with SSIS. If you're working with those types of jobs, and you're doing lots of table processing, not transactional processing but data analytics or data warehousing processing, make sure that you are looking at your records processed per second, your throughput. Again, that will also help point you to network bottlenecks and latency issues. And with that I'm going to hand it back over to Greg.

Greg: OK, Dallas. We have a bunch of questions, so let's kind of go through those, and if anyone else has questions, please submit them in the question section. So, one of the questions somebody has is: more and more people are moving to the cloud, and what do you see from that perspective as far as additional network latency? Do you think that's gonna be a bigger issue or something to be concerned about, versus having almost unlimited bandwidth if it's just a local server?

Dallas: It is a concern. Again, where is the data center, where is the actual cloud data center? Hopefully, your service provider will let you know, so you can figure out which one is closest to you; so you have that geographic issue there, that variable of geography. But yes, it obviously can be an issue, and so again, you need to establish a baseline. You can expect that it might take longer; it matters whether you're just talking about within the building or you're going off-site hundreds of miles away. So yes, there would be additional latency, but again, establish a baseline and see what's manageable and what's acceptable, and then what is not.

Greg: OK. The next question is: "On one of your slides you mentioned something about compression with log shipping. Is that something you can show how that's done, or an option to select that?" Can we get a little more information about that?

Dallas: Yeah, there's actually an option when you're setting up the log shipping; it's a dropdown box, I believe, or a radio button that allows you to compress, so there is an option in Management Studio for that.

Greg: OK, another question is related to overall backups, doing backups across the network, and I guess there's an option now in SQL Server to compress backups. Is that something that you would recommend, people compressing backups?

Dallas: Yes, I would, but again, with the compression there's a tradeoff. Yes, when you compress, the backup will take up a smaller footprint on your server and therefore it should go through the network faster. But you have to make sure that when you're executing that compression, you are not processor or memory constrained, so that you are not causing adverse effects on your server just for the sake of compressing. So it's a balancing act, and you have to do thorough testing and get a good idea of what is acceptable and what is not acceptable, just to make sure it's not a case of "Hey, this looks good, the backup runs faster," but in the process some of our transactions have been timing out. So again, this is a delicate balancing act.
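A compressed backup is a single WITH COMPRESSION clause; the database name and destination path below are placeholders, and comparing this run's duration and CPU use against an uncompressed run is part of the testing Dallas describes:

```sql
BACKUP DATABASE AdventureWorks2012
TO DISK = N'\\BackupShare\AdventureWorks2012.bak'
WITH COMPRESSION, STATS = 10;   -- progress messages every 10 percent
```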

Greg: OK, another question is related to Integration Services, and you just talked about how to decrease the network bandwidth there. The question is: "Does it make sense to run the Integration Services engine on a primary database server if you are moving and copying a lot of files, or should it be a separate server that only connects to the SQL database as necessary?" Do you have any recommendations from that perspective?

Dallas: Well, I remember reading several articles a few years ago saying that your primary database server should be separate from your Integration Services server, which should be separate from your Analysis Services server, which should be separate from your Reporting Services server. So if you're using the fully functional business intelligence suite, you would have four separate servers, and the problem with that is some organizations can't afford that. OK, so then you start talking about having four servers and four hot backups; now we've gone from one server to eight servers, so again, it's a balancing act. Will Integration Services run on a regular database server? Yes, it will. Can it cause problems? Yes, it can. So my recommendation would be not to run it there, but you might have constraints on your budget and your resources, and you just have to try to do what's best for your situation. Again, the optimal thing would be to have your traditional database server standing alone and your Integration Services server also standing alone, so they don't conflict with each other. And again, it depends on the type of processing you're doing. Are you bringing in a lot of files? Are you picking up files from the outside and then doing the true extract, transform and load to the relational database? I guess it just depends on the situation and the type of data that you're running through it.

Greg: OK, uh, the next question here is related to storing, um, like binary database, images, import documents, pdf documents etcetera, and that stuff going across the network, is that something that they should try to avoid to cut down on the network bandwidth? Do you have different servers or do you have some suggestions on, you know, storing binary database and latency issues with that?

Dallas: The idea of storing the documents, the binary files, the large objects, is fine. The question is how many times are you going to query it, how many times are you going to access it. The problem happens when you start moving it around, and that's the issue. Is this something that is stored and will be rarely accessed again, or is this something that will be accessed numerous times? And again, I'm probably starting to sound like a broken record saying it depends on the situation, but we need to look at access patterns then. How many times is this image going to be accessed, or how many times is this PDF going to be accessed? And that will determine where you will store the image, where you will store this large file.

Greg: OK, the next question is related to the wait stats. We were looking at the ASYNC_NETWORK_IO, and again that's just a number across the board, and somebody is wondering, is there anything in the DMVs that shows you down at the query level? I know when a query is running you can look at what's currently active, what the current wait stat is, but is there anything that can show you "this query took so much time for network IO versus something else"? Do you know of anything built into the DMVs?

Dallas: I'd have to get back to you on that. That's kind of what I was doing with the experiment, and of course, you don't know what else is going on; I was in my house where no one else was running queries. So what happens is, when you're in a production environment, you need to test the query. I'd have to get back to you on that and dig a little deeper to see if you can get that kind of statistic on a specific query.

Greg: OK, and I'm pretty sure the Idera SQL Diagnostic Manager tool allows you to see some of those wait stats for individual queries, so you can definitely take a look at their tool and see if there's something that can help, and there are probably other third-party tools out there that drill down to that level if it's not available directly in the DMVs …

Dallas: Absolutely.

Greg: OK. And I guess another question is related to network monitoring. Do you have any suggestions on tools? I know you mentioned Performance Monitor to capture some stats there; do you have any recommendations, any other tools that may make sense to monitor the network traffic?

Dallas: Yes, there is Wireshark. Wireshark, I believe, is open-source, and it can help monitor network traffic. There's also one called iPerf, and there are several others out there that are open-source that you can take advantage of. I am personally looking at Wireshark, using it for some other applications, and also helping with some research into network bottlenecks that we currently have going on here.

Greg: OK.

Dallas: Wireshark is the one I’m currently looking at.

Greg: OK. A couple of people have actually asked this about the queries that you showed for the jobs. They’re wondering if they can have access to that code to be able to download it.

Dallas: Yes, and I’m glad you asked that question, because I probably skipped over something important. Those queries actually came from a tip written by Chad Churchville, so we already have those out on the site. It was written by another member of the community, and I’m glad you asked because I forgot to tell you where I got that query from.

Greg: OK, and just so everybody knows, we’ll include those in a zip file. You can download the slides and the scripts, as well as the actual tip that Chad wrote. Um, let’s see… Alright, another question, and I’m not sure if you have an answer for this one, is whether you have any recommendations on using linked servers, say to query between servers. I guess they’re doing this with Reporting Services, so they may be using Reporting Services to connect to one server and then using linked servers. Do you have any ideas or suggestions about linked servers? Do they cause additional overhead?

Dallas: Honestly, I have no experience with that, so I’ll defer on that question. I’ll have to get back to you on that.

Greg: OK. Do you know the tip number for that by any chance, for Chad’s tip where the scripts are from?

Dallas: I do not have the tip number.

Greg: OK, no big deal. Like I said, we’ll get that out to everybody. And then another question about the audio/video: yes, you can download the PDF version of the slides, but this is also being recorded, so you can re-listen and replay the entire webcast as well, and again, that will be in some emails that you get.

So it looks like that’s about it for the questions. I guess, overall, in summarizing things: it seems like a lot of what you really need to do is follow the best practices for limiting the amount of data that is sent over the network. Compression is something you can use; it’s built into log shipping and built into backups. And then there’s being smart about using RDP. Overall there’s really not a whole lot you can do; it’s just about being smart, since there’s not a whole lot of compression built into SQL Server. Is that kind of a correct statement?

Dallas: Yes. I’ve taken the approach in this webinar that you can’t change the network, you can’t change any hardware, and you can’t change any software. So what can you do as an application developer, or maybe as a database administrator, whatever your role might be? If you can’t change the hardware and you can’t change the software, what do you have control over changing? So that’s kind of the ___ I was going with today.

Greg: OK. Yeah, so I think that plays in well with that tool Idera just released, the SQL Traffic Accelerator. I’m not sure if I explained this in the beginning, but what that tool does is actually sit between your client and your server, so all the data that’s being sent across the network is compressed. SQL Server ___ doesn’t really do a good job of compressing data. If anybody has ever done a compressed backup, you can see that sometimes you can take a hundred-gig file, do that backup, and compress it down to ten gigs, so you get a huge amount of savings there. It’s basically the same thing with transferring data across the network with SQL Server; there’s just a lot of empty space in the packets being sent back and forth. So this new tool, SQL Traffic Accelerator, that Idera puts into play actually compresses the data, so you can stream more data across the network without causing any latency. And it works with really anything that’s connected to SQL Server, so it could be replication, queries from Analysis Services, Reporting Services, your application, what have you. So you can definitely take a look at that and see if it will help as well if you are having ___ network issues as far as transferring ___ SQL Server. So Dallas, if you can just move to the next slide, we’ll just…
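[Editor’s sketch of the compressed-backup savings Greg mentions: SQL Server’s native backup compression, available since SQL Server 2008 R2 in Standard and Enterprise editions. The database name and path are placeholders.]

```sql
-- Hypothetical example: take a compressed backup and report progress.
BACKUP DATABASE MyDatabase
TO DISK = N'C:\Backups\MyDatabase.bak'
WITH COMPRESSION,   -- compress on the fly; ratio depends on the data
     STATS = 10;    -- print progress every 10 percent
```

The compression ratio achieved here is a rough indicator of how compressible that database’s pages would be on the wire as well.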

Alright, one other thing I want to mention here is that Idera wants to invite everybody to connect with them on their new community site. On the site you can access free best-practice tips, share experiences, and connect directly with the Idera product teams, so please check it out after today’s webcast. I want to thank Dallas for this great information and for putting together the webinar today. I also want to thank Idera for sponsoring today’s event. The webcast was recorded, and you will receive a followup email with links to the archive, the slides, and the scripts, as well as a link back to that tip that Chad Churchville wrote. I also invite everybody to join and visit the site, where you can access all of our free SQL Server resources, and you can join and get our newsletter that brings you free new tips every single day. I also want to thank everybody for attending today’s webcast, and I ask that when you exit the webinar you take a few minutes to fill out the survey; the feedback helps with both today’s webcast and future events. Once again, thanks for attending and I hope you-


Dr. Dallas Snider

Dr. Dallas Snider is an Assistant Professor in the Computer Science Department at the University of West Florida. He received his Ph.D. in Integrated Computing and M.S. in Instrumental Sciences from the University of Arkansas at Little Rock. Dr. Snider’s teaching and research interests include data mining, data warehousing, information visualization, and software development.

Try SQL Diagnostic Manager for SQL Server FREE for 14 days

SQL Diagnostic Manager Repository dashboard

24X7 SQL performance monitoring, alerting and diagnostics

  • Monitor performance for physical, virtual, and cloud environments.
  • Monitor queries and query plans to see the causes of blocks and deadlocks.
  • Monitor application transactions with SQL Workload Analysis add-on.
  • View expert recommendations from SQL Doctor to optimize performance.
  • Alert predictively with settings to avoid false alerts.
  • View summary of top issues and alerts with the web console add-on.

Start for Free