I’m using this a lot with mssql and in general it works great. But I’ve been noticing that it can affect other queries using the same database… even though I’m doing what I think is a pretty simple query, it seems like it is holding onto the query and perhaps even escalating the locks even though it is a select.
Is this what is going on in the Automise code? My iterator might only return, say, 400 rows and the query is pretty simple… so does it hold onto this query for the life of the iterations?
I have alerts which tell me when a query is taking a long time on a database server, and Automise scripts frequently come up as long running queries even though the query is, as I've mentioned, very straightforward.
It would appear that we are locking in read only mode on the query. I will have to dig a bit deeper tomorrow to understand why this is required/being done.
What is the effect you're seeing on your other queries? Are they being blocked from running while this query is active?
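In the meantime, if you want to see exactly what that session is holding, a query along these lines against `sys.dm_tran_locks` should list the lock resources and modes (just a sketch; replace 51 with whatever session_id your alert reports for the Automise query):

```sql
-- Sketch: list the locks currently held or requested by one session.
-- Replace 51 with the session_id your alert reports for the Automise query.
SELECT
    resource_type,         -- OBJECT, PAGE, KEY, DATABASE, ...
    resource_database_id,
    request_mode,          -- S (shared), IS (intent shared), X (exclusive), ...
    request_status         -- GRANT, WAIT, CONVERT
FROM sys.dm_tran_locks
WHERE request_session_id = 51;
```

If the iterator really is only taking shared (S/IS) locks, writers should normally still get through once each read completes, so the modes reported here would tell us a lot.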
That would be awesome thanks.
Yes, today I definitely had a case of that, and I've been worried about it for some time. I have an Automise script that regularly checks for long running queries on our 3 MSSQL database servers, but these days, more often than not, it comes up with other Automise queries.
This is the kind of query I am using to test for long running queries:
```sql
SELECT
    d.Text AS [SQL_statement],
    DB_NAME(database_id) AS DatabaseName,
    a.total_elapsed_time / 1000 AS Total_Elapsed_Time_Seconds
FROM sys.dm_exec_requests a
INNER JOIN sys.dm_exec_connections b ON a.session_id = b.session_id
INNER JOIN sys.dm_exec_sessions c ON c.session_id = a.session_id
CROSS APPLY sys.dm_exec_sql_text(sql_handle) d
WHERE
    (a.session_id > 50) AND
    (a.session_id <> @@SPID) AND
    (
        ((command IN ('BACKUP DATABASE','DBCC','DBCC TABLE CHECK','UPDATE STATISTIC')) AND
         (a.total_elapsed_time > (10 * 60 * 1000)))
        OR
        ((command NOT IN ('BACKUP DATABASE','DBCC','DBCC TABLE CHECK','UPDATE STATISTIC')) AND
         (a.total_elapsed_time > (1 * 60 * 1000)))
    )
```
And I'm getting these kinds of results… so the 1020 or so seconds might be the time the Automise script has been running for in this example, but the query itself is open for all of that time (at least according to the system info query above). Presumably I can't update that table during that time? I don't do it very often… usually it is inserts, so I guess it is something that could happen more often in other situations. It sounds like something we need to avoid anyway.
```
----------------------------------------- Database Server 1 -----------------------------------------------
Code : 1
Description : Database Server 1
DataSource : enfinit-srvdb1
SQL_statement : SELECT
                    b.[Code] AS [ClientCode],
                FROM [ClientDatabase] AS a
                INNER JOIN [Client] AS b ON a.[ClientID] = b.[ID]
                INNER JOIN [DatabaseServer] AS c ON a.[DatabaseServerID] = c.[ID]
                WHERE
                    (a.[Status] = 'Registered') AND
                    (a.[Active] = 1) AND
                    (b.[Code] NOT IN ('DELETE','DEMO','DEV','REVIEW'))
login_time : 12/02/2015 8:00:03 p.m.
host_name : ENFINIT-SRVST1
program_name : Automise 4 Console
session_id : 51
client_net_address : 192.168.32.12
status : suspended
command : SELECT
DatabaseName : TimeAdmin
Total_Elapsed_Time_Seconds : 1020
```
Is it a possibility to cache the result set into a memory table of sorts, close the sql query as soon as possible, and then iterate off of that?
I have had a look at our ADO iterator action code and found that it's holding the dataset open for the entire life of the iterator. We do attempt to be as "nice" as possible: making sure to open in read-only mode, and using a client-side cursor. There is very little clear information as to what this does to an MSSQL server. Clearly it seems to be locking the table for writes, which I don't understand (we are read-only and using client-side cursors).
The best solution here is to make the query as short-lived as possible, as you suggest. Again, as you suggest, storing the results in memory would be the way to go. Currently this would require a fairly large rewrite of how the action works, and great care with handling the memory required to perform the action. For this reason I have logged it as a feature request.
In the meantime, depending on how many values you're getting, you could store them into project variables and iterate over those instead. This would shorten the lifetime of the ADO iterator to just the time required to perform the copy into local variables.
Thanks for that. I have a couple of dozen scripts which I don’t really want to change too much and go through a settling down process again. I looked for an easy way to stick those resulting variables into some other structure and iterate off that, but it looks like it would be more than a line or two per script which might have been manageable.
I might just wait for a while and hope it gets onto the TODO list. I definitely think it would benefit from at least some improvement, just because these iterators are likely to be used in long running scripts, and the risk of blocking updates for the length of the iterations is a worry.
I don't completely understand the lock escalation stuff either when you're expecting to just do a read, although I know it is possible. I wonder if the client cursor type you are using is bi-directional or something, and it wants to keep locks in place to guarantee the same read on the off chance you go back to it (OK, just a guess). Whereas if it were a uni-directional (firehose?) cursor, it might be a lot friendlier?
Assuming I am vaguely on track, even that would be an improvement, even if the cursor was held open for a long period of time.
A workaround I might try in the meantime is to see if the ADO query allows me to put the results into a temp table and select off of that. At least it wouldn't be locking the main tables.
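Something along these lines is what I have in mind, using the tables from my query above (just a sketch; `#ClientList` is a name I made up, and whether the Automise ADO action will run a multi-statement batch like this is something I'd still need to test):

```sql
-- Sketch of the temp-table workaround. Any shared locks taken on the base
-- tables are released once the SELECT ... INTO completes; the iterator then
-- only ever touches the session-local temp table.
SELECT b.[Code] AS [ClientCode]
INTO #ClientList
FROM [ClientDatabase] AS a
INNER JOIN [Client] AS b ON a.[ClientID] = b.[ID]
WHERE (a.[Status] = 'Registered')
  AND (a.[Active] = 1)
  AND (b.[Code] NOT IN ('DELETE','DEMO','DEV','REVIEW'));

SELECT [ClientCode] FROM #ClientList;  -- point the ADO iterator at this instead

DROP TABLE #ClientList;
```

One caveat I'm aware of: temp tables are per-connection, so both statements would have to run on the same connection for the iterator to see `#ClientList`.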
Yeah this is the issue I am seeing…
1. SQL Server will not release the locks held by the query unless acknowledgement is received from the client, which might cause blocking.
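Two server-side mitigations I've seen suggested for this general readers-blocking-writers problem, for what they're worth (assumptions: the first needs permission to change a database setting, the second tolerates dirty reads; neither is specific to Automise):

```sql
-- Option 1: enable row versioning so readers never block writers.
-- Database-level change; TimeAdmin is the database from the output above.
-- Test before applying to production.
ALTER DATABASE [TimeAdmin] SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;

-- Option 2: per-query dirty reads, if occasionally stale or uncommitted
-- rows are acceptable for this kind of lookup.
SELECT b.[Code] AS [ClientCode]
FROM [ClientDatabase] AS a WITH (NOLOCK)
INNER JOIN [Client] AS b WITH (NOLOCK) ON a.[ClientID] = b.[ID];
```

Neither fixes the underlying issue of the iterator holding the query open, but either would stop the held query from blocking inserts and updates in the meantime.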
I have a feature build available that allows for running the ADO iterator in memory. Please let me know if this reduces the locking times you're seeing from FinalBuilder.
I have a similar issue… I just want to check your opinion before I take action.
I’m still on 18.104.22.168.
When I loop through a dataset with a normal database connection string it is fine. However, if I change the connection string to access the database through our F5 local traffic manager (because I'd like to have a way in future of having mirrored databases and a pool… and it still ends up going to the same DB server in a roundabout way), then after about 5 iterations for one of my scripts (and a different number for others), the query fails with a 'general networking error', as if the connection was dropped.
Before I do anything else, I’d just like to know your opinion in case it is something more serious with my F5 configuration.
1. Would this error have been possible with this version of Automise (which is now quite old)? Have there been any ADO fixes since that might have helped here?
2. I can see on a trial copy of the latest version 5, that there is some in memory feature which on the face of it will probably solve my problems. However I just want to rule anything else out first.
Jason has moved on, so I’ll answer this.
1) Apart from the in-memory option that Jason added, there have been a few other minor fixes to the ADO Iterator action, and one fix to deal with a random access violation (crash).
2) Impossible to know. I can only suggest that you test it out. Looking at your account, you do actually have AT5 licenses available.
Thanks for all of that. I’ll give AT5 a go shortly.