0 calories syntactic sugar for your sp_executesql statements

Today I'll share a helper procedure that doesn't add much to your code beyond some syntactic sugar and readability.

If you actively use integrations with other systems through linked servers, you have inevitably run into a limitation of the openquery() statement: it can only accept static strings as queries to an external data source. A workaround is to build a string containing the full statement and then use the insert ... execute sp_executesql ... construct. Putting the whole conversation about efficiency and error handling aside, your code becomes much less readable, as you need to double your single quotes twice - first to escape them in your MS SQL string assignment, and a second time as you build the parameter for openquery(), which must be a string in its own right.

As there's more than one kind of quote character available, I decided to add a simple procedure that saves me from counting single quotes in my strings and does this trick for me. So, here's the procedure code:
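The original gist is not reproduced here, but based on the behavior described below, a minimal sketch of such a procedure might look like this (the REPLACE-based implementation is my assumption, not the original code):

```sql
-- Hypothetical sketch of sp_executequotedsql:
-- " becomes '' (an escaped single quote inside the query string),
-- ` becomes '  (the quotes delimiting the query string itself).
CREATE PROCEDURE sp_executequotedsql
    @sql   nvarchar(max),
    @debug bit = 0
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @stmt nvarchar(max) =
        REPLACE(REPLACE(@sql, '"', ''''''), '`', '''');
    IF @debug = 1
        PRINT @stmt;  -- inspect the final statement before it runs
    EXEC sp_executesql @stmt;
END
```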

It properly escapes double quotes " with the right number of single quotes for embedding your string parameters, and replaces grave accents ` with the single quotes that delimit the beginning and the end of the query string.

And here's its usage; you can compare the readability of the statements with and without it for yourself:
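As an illustration of the idea (the linked server name SALESFORCE and the SOQL filter are placeholders of my own, not from the original gist), a call might look like this:

```sql
-- Hypothetical usage: ` delimits the openquery string,
-- " stands in for single quotes inside the SOQL filter.
EXEC sp_executequotedsql
    'SELECT * FROM openquery(SALESFORCE,
        `SELECT Id, Name FROM Account WHERE Type = "Customer"`)',
    @debug = 1;
```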

The code should be self-explanatory, but take note of the second, optional @debug parameter to sp_executequotedsql. Once set to 1, it prints out the resulting parameter for the standard sp_executesql procedure, so you can inspect that the quoting is done right. Enjoy!


Back to school, helping your support team again

Hi Blog!
It's been 3 months since the last post, so here are more tips and tricks for using MS SQL with DBAmp to get your job done as a Salesforce data guru.

Today I'm going to show a short piece of T-SQL code that illustrates three helpful techniques, particularly useful if you're dealing with ad-hoc user requests for dashboard data or for delivering data to their inbox on a regular basis.

Here's the gist:

Now let's discuss what it does and how. There are 3 nested data queries in this gist.

  1. The inner data query uses SOQL and is executed on the Salesforce side. It is very specific both in terms of time-sequence filtering and the additional filters needed. By the way, don't get puzzled by the origin = 'GC' condition; substitute it with something that makes sense for your organization. Also notice how we use the date = LAST_N_DAYS:10 construct to return only the needed date interval. All in all, the query is selective: only the columns needed in the following calculations travel over the network.
  2. Data enrichment is done on the MS SQL side. We calculate our business metric(s), time to resolution in hours in this particular case, on the SQL side instead of defining additional fields in Salesforce. After all, this may be only a temporary need. I'm utilizing the isnull() function to use the current timestamp when calculating the metric for still-open issues. One last point here: I want decimals for time expressed in hours, so I first multiply the TTR by 1.0 to avoid truncation of the decimal part, as SQL would otherwise round to integers.
  3. The last step is to aggregate the calculated values and format the results. Again, nothing particularly complicated here: just getting standard aggregates and making them decently formatted. Check out the FORMAT function in case you want to dive deeper.
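The three steps above can be sketched as follows. Case, CreatedDate, ClosedDate and Origin are standard Salesforce Case fields, but the linked server name SALESFORCE and the exact metrics are my assumptions, not the original gist:

```sql
-- 3. aggregate and format per week
SELECT
    DATEPART(week, CreatedDate)  AS ReportWeek,
    FORMAT(AVG(TTR_Hours), 'N1') AS AvgTTR,
    FORMAT(MAX(TTR_Hours), 'N1') AS MaxTTR
FROM (
    -- 2. enrichment on the SQL side: time to resolution in hours,
    --    multiplied by 1.0 to keep the decimal part
    SELECT
        CreatedDate,
        DATEDIFF(minute, CreatedDate,
                 ISNULL(ClosedDate, GETDATE())) * 1.0 / 60 AS TTR_Hours
    FROM (
        -- 1. selective SOQL executed on the Salesforce side
        SELECT * FROM openquery(SALESFORCE,
            'SELECT CreatedDate, ClosedDate FROM Case
             WHERE Origin = ''GC'' AND CreatedDate = LAST_N_DAYS:10')
    ) AS src
) AS enriched
GROUP BY DATEPART(week, CreatedDate);
```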

And the result will look like this:

To sum it up,

  • use selective SOQL queries and return only the needed data over the network;
  • calculate your business metrics on the SQL side using its rich set of functions;
  • get aggregates and formatting as fits your customers.

That's it for today!


Automating automation

If your shop is big enough, you probably have many automation jobs and may easily have tens of different job steps that you schedule to run, say, nightly. How do you check the execution status? Here's a short gist that takes the execution results of all the steps, checks if there were any errors, and then sends an email to all the needed recipients listing all the failed steps.
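A minimal sketch of this idea, assuming SQL Server Agent and Database Mail; the mail profile name and recipients are placeholders:

```sql
-- Collect last night's failed job steps from the msdb history tables
-- (run_status = 0 means the step failed) and mail the list.
DECLARE @failed nvarchar(max);

SELECT @failed = STUFF((
    SELECT '; ' + j.name + ' / ' + h.step_name
    FROM msdb.dbo.sysjobhistory h
    JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
    WHERE h.run_status = 0
      AND h.run_date = CONVERT(int, CONVERT(char(8), GETDATE(), 112))
    FOR XML PATH('')), 1, 2, '');

IF @failed IS NOT NULL
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA',              -- assumed mail profile
        @recipients   = 'ops@example.com',  -- placeholder
        @subject      = 'Failed job steps',
        @body         = @failed;
```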

To improve things further, you can split your steps into two or more jobs based on their importance, or include more information in the notification email, like links to specific URLs that are relevant to your task at hand.


Manually syncing data from an external data source

In our environment, we have several linked servers defined for our MS SQL central DB. One or two times a day we need to pull data from these external data sources and store it in a local table for further processing. All records have timestamps, but we cannot rely on time being in sync between the MS SQL and linked servers. Moreover, some records can be recorded at a later time, yet having earlier timestamps.
So, here's what we did to come up with a workable solution under these constraints. First, we use the current time, offset back, as a parameter for the call to the linked server. The offset is large enough to guarantee we get all the records we have not seen before. Second, we use EXCEPT to skip the records we already have in the local table. Here's the gist:
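The approach can be sketched like this; LINKEDSRV, RemoteTable, LocalTable, the columns and the 48-hour offset are all placeholder assumptions:

```sql
-- Pull recent rows from a linked server with a safety offset,
-- then keep only the rows we have not stored yet.
DECLARE @since datetime = DATEADD(hour, -48, GETDATE());
DECLARE @sql nvarchar(max) =
    'SELECT Id, Payload, RecordedAt
     FROM LINKEDSRV.db.dbo.RemoteTable
     WHERE RecordedAt >= @since';

DECLARE @incoming TABLE
    (Id int, Payload nvarchar(200), RecordedAt datetime);

-- insert ... exec works for sp_executesql results, too
INSERT INTO @incoming
EXEC sp_executesql @sql, N'@since datetime', @since = @since;

-- EXCEPT drops the rows already present locally
INSERT INTO dbo.LocalTable (Id, Payload, RecordedAt)
SELECT Id, Payload, RecordedAt FROM @incoming
EXCEPT
SELECT Id, Payload, RecordedAt FROM dbo.LocalTable;
```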

Another point worth mentioning: yes, you can use insert ... exec sp_executesql @sql_string to insert the results of a stored procedure execution into a table.


From UNIX timestamps to SQL datetime

Ever struggled to recall when the UNIX epoch begins? I certainly did, so here's a function FromUnixTime() to conveniently convert UNIX timestamps to datetime values. The usage is straightforward:
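For reference, the epoch begins at 1970-01-01 00:00:00 UTC, so a minimal version of such a function is a single DATEADD (this sketch ignores time zones and leap seconds, and is not necessarily the original implementation):

```sql
CREATE FUNCTION dbo.FromUnixTime (@ts int)
RETURNS datetime
AS
BEGIN
    -- UNIX timestamps count seconds since 1970-01-01 00:00:00 UTC
    RETURN DATEADD(second, @ts, '19700101');
END
```

For example, SELECT dbo.FromUnixTime(0) returns 1970-01-01 00:00:00.000.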


DBAmp Cookbook updates

A few helper functions have been added to the DBAmp Cookbook. Here's why you need them and how you can use them in your code.
  1. fn_GetSFDate() and fn_GetSFShortDate(). These functions allow you to build SOQL statements that can be used with openquery while manipulating dates in SQL format. Consider this example:
  2. fn_GetAllReports() – an example of a local function. This function returns all the employees reporting – both directly and indirectly – to the provided manager. The function uses common table expressions in MS SQL. In case you are not familiar with this concept, it allows you to define recursive select statements, which is a perfect fit for traversing an organizational hierarchy. As it's executed locally, to use this function you need to replicate the User object to the local DB.
  3. SalesforceId18() is a helper function that safely returns an 18-character Salesforce ID even if a 15-character one is provided as an argument. Unfortunately, I couldn't find the original implementation of this transformation, but it was ported from an SF formula implementation. On the SF side you don't need to reinvent the wheel anymore, as the formula function CASESAFEID(id) provides the conversion.
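To illustrate the recursive CTE idea behind fn_GetAllReports(), here is a sketch assuming a locally replicated User table with Id, Name and ManagerId columns (the exact signature is my assumption):

```sql
CREATE FUNCTION dbo.fn_GetAllReports (@managerId nchar(18))
RETURNS TABLE
AS
RETURN (
    WITH reports AS (
        SELECT Id, Name, ManagerId
        FROM dbo.[User]
        WHERE ManagerId = @managerId          -- direct reports
        UNION ALL
        SELECT u.Id, u.Name, u.ManagerId
        FROM dbo.[User] u
        JOIN reports r ON u.ManagerId = r.Id  -- indirect reports
    )
    SELECT Id, Name, ManagerId FROM reports
);
```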


When you cannot build a report in Salesforce: supporting your customer support team

Salesforce.com is a powerful platform. The power of it comes from careful selection of features made available in the multi-tenant environment. As with any engineering decision for any technical system, there's a list of trade-offs that are made to ensure the best performance possible in the most common scenarios. But what if you need something, which is just slightly different from these 'most common scenarios'?

There are no arbitrary joins between the entities in SFDC and for a good reason: you don't want to build all indexes possible for all of your tenants. While it's fine in 'most common cases' during normal user interactions with entities in UI, it may cause major headaches when your users ask for reports on their very own data.

I have one stunning example of such a situation. Imagine the director of your customer support team comes to you and asks for a weekly report showing open-vs-closed cases side by side. Something like this:
Should be easy to do, right?

Your first thought would be to use a joined report, and you're right, a joined report would definitely do here! But how do you join it? You will be reporting on the same entity – Cases – in both parts of the report, but you have to group on different fields in these two parts – Date/Time Created and Date/Time Closed. How would you group on two separate fields at the same time? Well, you can try a formula field which shows the created date if the case is still open, or the closed date if it's resolved. Unfortunately, this will not work. Consider a support case which was opened on Tuesday last week and resolved the next day, on Wednesday. This case should be accounted for in both columns for the last week, for both opened and closed cases. Any other options?

In our environment, we ended up employing DBAmp for this job. Essentially, we calculate separately the number of opened and the number of closed cases for each week, then upload these numbers to an auxiliary entity in SFDC, which has three fields: reporting week, number of opened cases and number of closed cases. Then the resulting joined report is built based on that entity and, since the date field is the same for both counters, you don't have a problem grouping on the reporting week field.
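Assuming the Case object is replicated locally via DBAmp, the weekly counters might be computed along these lines (the week bucketing and column names are illustrative placeholders):

```sql
-- Count opened and closed cases per week, ready for upload
-- to the auxiliary reporting object.
SELECT
    COALESCE(o.Wk, c.Wk) AS ReportingWeek,
    ISNULL(o.Opened, 0)  AS OpenedCases,
    ISNULL(c.Closed, 0)  AS ClosedCases
FROM (
    SELECT DATEPART(week, CreatedDate) AS Wk, COUNT(*) AS Opened
    FROM dbo.[Case]
    GROUP BY DATEPART(week, CreatedDate)
) o
FULL OUTER JOIN (
    SELECT DATEPART(week, ClosedDate) AS Wk, COUNT(*) AS Closed
    FROM dbo.[Case]
    WHERE ClosedDate IS NOT NULL
    GROUP BY DATEPART(week, ClosedDate)
) c ON o.Wk = c.Wk;
```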

Check out the DBAmp Cookbook repository for this and other recipes.


From Sourceforge to GitHub

As I recently discovered, much to my surprise, there's a decent number of downloads from the old repository at sourceforge.net. As I was the last contributor to the project, I've decided to give it a new life and move it to GitHub, so here it is:

Don't get confused by the Python label on the project. It's an xUnit-like framework implemented entirely in T-SQL.
I'll cover its features in a few posts here.

Blog 2.0

It's been quiet here for about 6 years, and many things have happened. I'm reanimating this blog to share my accidental thoughts and findings in the curious world of IT, focusing on Enterprise Architecture, AI, Blockchain, Salesforce and everything around them.
Stay tuned!