Truth About Working with Tech Recruiting Agencies

Tech hiring has been almost completely outsourced, and working with recruiting agencies has become a necessity. A recruiting agency can be either a bridge or another hurdle between you and the job you want. If your target market or industry isn't dominated by recruiters, you should probably avoid them and rely on more direct methods.

Recruiting agencies can help get you in the front door, prep you for the interview, and help with salary negotiations. This is supposedly the main benefit of working with an agency, but not all agencies perform it equally well. I once worked with an agency where I had a five-minute initial screening call with the recruiter, another five-minute screening call with the Account Manager, and then never heard from them again. Afterwards, I had three successive interviews directly with the hiring company. I did all the work of selling myself and negotiated my own salary, and in the end they got paid a big commission – for basically passing along my resume.

Bottom line: It is up to you as to how you navigate the world of recruiters. Knowledge is power!

Recruiting agencies are in the sales business. There are many players in the game because there is good money in high-tech recruiting. This can sometimes create undesired conflicts and put candidates in uncomfortable situations. Here's what I have learned from my own experience.

How Recruiting Agencies Work

  1. When a job is posted on the internet, every recruiter in the country (and outside the country) will scour the job sites and resume databases for potential candidates, blast off emails and phone calls, and fight for the opportunity to match a candidate to the posting (hopefully before someone else beats them to it) to make a sale. They often cast a wide net, and many of the positions they pitch are not even close matches, because they are playing a numbers game with unqualified leads.
  2. Think before you share. Avoid putting your home address, primary email, or primary phone number on your resume. Use a dedicated email address and phone number (e.g. Google Voice) for job hunting. Once you post your resume online, expect tons of recruitment spam for years to come. Even though I only posted my resume on three major career sites, it eventually ended up in outside databases that generate recruiter spam forever.
  3. Every agency is different, based on the individual recruiter or the company culture they promote. Many are small outfits of one to a few people, some are medium-sized with a local staff, and some have offices across the country or the world. Before submitting your resume, visit their websites to learn more about them. Many email blasts appear to come from remote lead finders, probably working on referral fees, who are really just an extra hurdle between you and a job offer. In addition, they may not be authorized to submit your resume to the hiring company, which can cause more problems than it's worth. Ignore them and stick to agencies you have researched, that appear trustworthy, and that can provide value to your job search.
  4. Bigger is not always better. My experience has been that the bigger an agency is and the more volume it does, the less personal attention it pays to the details and the relationships, which, in my opinion, makes it less effective. Many larger agencies are spread across the country, and I found it more difficult dealing with multiple people in different time zones for a local position. I prefer to target smaller or local agencies that specialize in my target area and are more likely to build better relationships with hiring companies. However, a bigger agency that has a local office and builds local relationships with hiring companies can work just as well.
  5. Facts are fuzzy. Unfortunately, you may not find out the true details about a job until you interview directly with the hiring company. Agencies often get the details of the job listing wrong. Even hiring companies put out poorly written job descriptions and may add many nice-to-have items that may never be required. Don't be afraid to apply for a position that you have some of the skills for but that asks for too many things – many of them may end up being just nice-to-haves anyway.
  6. Many agencies work in teams: recruiters scour the internet for candidates while the Account Manager builds relationships with companies and hiring managers. Your goal is to get an interview directly with the hiring company, but first you must navigate your way through the recruiting agency, with one or more mini-interviews. The recruiter is usually the first person to contact you. They will attempt to pre-assess you and your skills before deciding to pass your resume along to the Account Manager and then to the hiring manager.
  7. Agencies are in the sales business. It is the agency's job to extract your best qualities and sell them to the hiring company. They will represent you before and after the interview and may even help you negotiate a higher salary than you would have on your own. This is supposedly the main benefit of working with an agency, but not all perform these services equally well. They may oversell you to close a deal, or they may pass on you after deciding you are not a good match. It is really up to you to sell yourself to them.

Consulting Companies
If the hiring company allows contract or contract-to-hire positions, consulting companies will also be recruiting for the position. This changes the whole dynamic of your career path, so give serious consideration to whether this is the route you want to pursue before applying with them. Consulting companies may allow the hiring company to hire you after a certain number of months (once they deem it profitable to do so), and some want to retain you in their employee talent pool for future contract engagements. You will need to research the consulting company as well as the hiring company and consider such things as when and how you are paid (even between assignments), how expense reimbursements are handled, what benefits are offered, and when and how you qualify for them.

Truth about Recruiting Agencies

1) You Can’t Pick Your Recruiter
A good recruiting agency can help you tremendously, but you can't just search for the best one and sign up with them. You most likely have to work with whichever one posted the position you wish to apply for. And if you apply for a dozen jobs, that may mean a dozen different agencies. The competition is heavy, and not every agency can cover every available position. You will often see the same position advertised by several different agencies; if that is the case, you may have some choice about which one to apply through.

Many hiring companies have pre-arranged agreements with certain agencies and may negotiate exclusive relationships. These agreements can be formal or informal, but they may define who is authorized to present candidates to the hiring company. If you don't go through an authorized channel, your resume may go into a black hole and never be seen. Make sure the agency has a relationship with the hiring company, or call the hiring company's HR department to verify if needed.

2) They Don’t Work for You
Recruiting agencies will help you and appear to be on your side, but remember that they are working in their client's best interest, not yours. The company that pays them after you get hired is their client. This is a highly competitive business where they want to present the best candidates to their clients as fast as possible – before someone else fills the position and they get paid nothing for their efforts. They are not career counselors, and they simply don't have the time to find a job that's perfect for you. However, they may also be the only path between you and the company you want to work for.

3) Conflicts of Interest
Remember, they are working for the hiring company and not you. They may have a conflict of interest depending on whether you are the only candidate they are representing to a company or one of many they are representing for the same position. If you have highly desirable skills, this works in your favor, but if you are still developing or updating your skills, it can be another challenge.

They do want to get paid and may employ methods to give you (and them) an edge. On one hand, they may provide you with inside knowledge about the company or hiring manager that you would never have known when applying directly. On the other hand, I have had more than one agency give me a list of questions the hiring manager used to screen past candidates, and one even gave me the expected answers to help me pass the interview. Yes, these kinds of tactics (and worse) really do happen.

Don't apply for the same position through more than one agency. Ethically, an agency should never submit you as a candidate without first asking your permission, but it does happen. Most recruiting contracts stipulate that the employer must pay the first recruiter who submitted the candidate. If a conflict develops during the hiring process between two agencies both claiming to represent you (i.e. both want the commission), it may cost you the job offer – if the hiring company is unwilling or unable to resolve the conflict. The same applies if you submit yourself both directly and through a recruiter, end up with a job offer directly from the company, and the recruiting agency finds out. This happens more often than you might think.

4) They Have Little Time for You
Agencies deal with numerous openings and prospects and have limited time to devote to every candidate. It's a little like speed dating. If a recruiter likes your resume, they will call you and, within minutes, decide whether you are a qualified match or move on. If the recruiter decides to take the next step, they will pass your resume along to the Account Manager and eventually to the hiring manager. The recruiter is usually the person who does all the busy work to drive in leads and pre-qualify candidates. Realize that they may have very limited knowledge of the technology or the actual job position. Keep technology conversations at a high level and speak in general terms.

You need to be prepared to quickly and effectively communicate everything they need to know about you to feel confident in passing your resume along so you can achieve the goal of getting the all-important job interview. If you fail and the agency decides you’re just not a match, you may have no choice but to move on. Learn all you can from this experience to better prepare yourself for the next one.

5) They Don’t Understand Technology
Most have minimal understanding of technology, and on the first phone call they will be assessing your technical skills. They will match keywords on your resume to keywords on the job posting and check whether you have X years of experience with skill Y to see how you stack up against other candidates. They will not understand the important nuances of your experience that may make you a better or worse candidate than someone else. They are mostly concerned with the highlights. Yes, this is a terribly superficial way to evaluate a candidate, but it's a necessary step you have to get through before you can get to the all-important interview with the hiring company.

However, this is an opportunity for you to use good people skills and show you are likeable and set yourself apart from other candidates.

6) This is a Commission Based Sales Business
The tech recruiting field is a lucrative business, paying 20-40% commissions on your annual salary. For a $100K salary that could be a $20K to $40K payday, so they do want their candidates to get hired so they get paid.

The recruiting business is a numbers game. The number of emails sent drives the number of screenings, which drives the number of interviews, which drives job offers and, finally, a sale – a candidate hired. Sometimes, even if you are a marginal match for the position or the position isn't the best match for you, they may oversell you to the client and push you to apply anyway – to get interviews. Interviews are how they get their foot in the front door and build relationships with the hiring company. Recruiting agencies promote that they can find the best candidate to fill the company's open position. In reality, if a hiring company is sent three people to interview, it may just pick the "best one" to fill the position instead of interviewing dozens over many weeks. It's a numbers game, and agencies can't get paid if they don't send candidates on interviews.

However, again you can use this as an opportunity. Even if the job is not the best fit, you can use it to hone and practice your interviewing skills. And you never know: you may be pleasantly surprised and find out it's a great fit after all.

Summary

You can’t always choose the agency you work with, but you can choose what you do with the situation you are given. Recruiting agencies also face many challenges working with various hiring companies and candidates, so I’ll concede this business can be difficult on both sides of the recruiting process. Do your homework before applying with an agency and don’t do anything or make any agreements that make you uncomfortable.

 

References

Here are some additional viewpoints from others.

Playing the Third-Party Recruiter Game
http://dataeducation.com/playing-the-third-party-recruiter-game-t-sql-tuesday-093/

What You Need to Know About How Recruiters Get Paid (And How It May Affect Your Job Hunt)
http://chameleonresumes.com/need-know-how-recruiters-get-paid-may-affect-job-hunt/

Why Recruiters Are Bad For Your Career
https://www.brandonsavage.net/why-recruiters-are-bad-for-your-career/

How a Tech Recruiter Can Help You Get Hired in Your Dream Job
https://skillcrush.com/2016/04/05/how-a-tech-recruiter-can-help-you-get-hired-in-tech/

11 Tips for Working With IT Recruiters
https://www.cio.com/article/2377612/careers-staffing/11-tips-for-working-with-it-recruiters.html


SSMS Best Practices and the Dark Theme

Our tools can make us more efficient and productive – if we take the time to learn their secrets. I spend a lot of time in SQL Server Management Studio (SSMS) and have learned valuable tips and features along the way from many others (links provided).

SSMS Settings

Here are my favorite tweaks to improve your working environment.

  • Register all your servers into logical groups and assign different server colors in SSMS
  • Query tab names are too long and unreadable, unless you configure SSMS tabs to show only file names
  • Don’t box yourself in. Auto-hide Registered Servers, Object Explorer and Properties panes and toggle results pane using Ctrl-R
  • Clean up your toolbars. Remove buttons you never use and free up screen real-estate
  • Turn on line numbers in the left margin
  • Set up keyboard shortcuts under Tools > Options > Keyboard

Shortcuts

Shortcuts help us get things done quicker, and that's a good thing. There are tons of shortcuts, and you probably won't learn them all or keep a list by your keyboard. A good rule of thumb: whenever you find yourself frequently repeating a task and stopping to fiddle with the GUI, you might want to learn an existing keyboard shortcut or create your own. Browsing the menus and toolbars will reveal many shortcut alternatives.

  • Toggle results pane with Ctrl-R and free up your coding space between runs
  • Execute partial script: Ctrl + Shift + Up/Down then F5 will highlight and execute everything upward or downward
  • Also, placing a temporary RETURN command in your script acts as an exit statement, preventing anything after it from accidentally executing (caution: if you use GO statements, the RETURN will only exit the current GO batch; see the sketch after this list)
  • Comment code block: Ctrl-K Ctrl-C, Uncomment code block Ctrl-K Ctrl-U
  • Open new query window: Ctrl-N
  • Indenting code: highlight lines Tab to increase indent, Shift + Tab to decrease indent
  • Vertical block editing: Shift + Alt + Up/Down Arrow Key or Alt + Up/Down Mouse Drag (occasionally very useful, see it to believe it)
  • Switching servers is quicker when you use the toolbar button to change connection (or create your own shortcut)
  • Plus more shortcuts than you can handle
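
To illustrate the RETURN trick mentioned above, here is a minimal, self-contained sketch (the SELECT statements are just placeholders for real work):

[code language="sql" light="true"]
select 'runs' as step1;

return; -- temporary exit: nothing below this runs in the current batch

select 'skipped' as step2;
go

-- caution: RETURN only exits the current batch, so this still runs after GO
select 'runs again' as step3;
[/code]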

If you’re using RedGate SQL Prompt, then you have many additional awesome productivity enhancements. There are numerous features, but here are some I use frequently.

  • Auto-completion and quick peek features are way better than native SSMS and have proven to be huge time savers
  • F2 to rename any alias and variable for every instance within your script (also smart rename for refactoring objects)
  • Ctrl-F12 on a table, sproc, or view in your script and it will automatically locate and select it in object explorer pane (no more drilling and scrolling and drilling and scrolling)
  • Ctrl-K Ctrl-Y Reformat script according to your formatting preferences (highly configurable and powerful)
  • Tab History allows you to find every script you’ve written and search them by keyword – even if you never saved them (SSMS now provides crash protection, but Tab History has saved me time and prevented unnecessary re-work)
  • Server tab color coding is better than that provided by SSMS
  • Snippet Manager and 12 SQL Prompt Tips: need I say more?

The Dark Theme

Many SQL users are still using the default bright white background, but many have joined the dark side. There is much science and debate on light vs dark backgrounds and numerous factors play into the equation. Whatever the science, many developers using a dark theme claim they can sit in front of a monitor for longer periods with more comfort and less eye strain. So it’s worth trying it out and deciding for yourself. When I first saw a dark theme, my reaction was negative, but I decided to try it out for one week – years later, I have never reverted back. Even Yoda knows, “the dark side is quicker, easier and more seductive and forever will it dominate your destiny.”

I first became familiar with the default Visual Studio dark theme, which, I think, got the contrast balance right. I couldn't understand why this theme wasn't made available in SSMS as well, so I eventually designed the SQL Grinder Dark Theme based on it (see screenshot). Here is my environment complete with Hack font, single toolbar row, color-coded query tabs (that only display the file names), and auto-collapsed sidebar windows.

The settings file contains all the SSMS settings tweaks along with the dark color scheme. You can choose to import just the colors or everything; just export your current settings first. Of the over 150 text editor elements, I only focused on the T-SQL subset, which met my needs. If you see something you don't like, it's easy to change, but finding the correct display item can sometimes be a challenge. This SSMS 2016 settings file should also work with the SSMS 2012 and 2014 versions that use XML settings files.

Hidden Dark Theme

Microsoft has left dark mode in SSMS unfinished through many versions, even though many people have requested it. However, they were nice enough to ship it anyway and just disable it. You will have to comment out the configuration line that keeps it hidden from you. For SSMS 2016 and SSMS version 17, see How to Enable Dark Theme for SQL Server Management Studio. UPDATE: For SSMS version 18, see the bottom of this article.
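
As a rough sketch (the linked article has the full steps; the default install paths below are assumptions and may differ on your machine), the edit is the same idea as the SSMS 18 update at the bottom of this article: open ssms.pkgundef as administrator and prefix the dark theme entry with // so it is no longer removed.

SSMS 2016: C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\ManagementStudio\ssms.pkgundef
SSMS 17: C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Binn\ManagementStudio\ssms.pkgundef

// [$RootKey$\Themes\{1ded0138-47ce-435e-84ef-9ec1f439b749}]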

Contrast

Even among dark themes, there are many to choose from, and personal preference is subjective. You can download themes made for Visual Studio, which may not work as well for SSMS, or make your own the hard way. The problem, IMHO, with most themes I tried is that they don't maintain a lower contrast. Many use too many bright text colors with high contrast and use pure white instead of light gray. Some use a faded black or bluish background to compensate, which isn't necessary if you don't go crazy with the text colors.

A dark theme works better when the surrounding environment doesn't provide high contrast. Room lighting and screen brightness are also factors in your viewing comfort, but this too is subjective. Some people can work on a laptop at maximum brightness sitting in a dark room. That's high contrast and hurts my eyes. My preferred daytime environment is a naturally lit workspace (no lights), or nighttime with maybe some soft background light – always with a low screen brightness.

UPDATE: Windows 10 now offers an option to automatically enable night mode, which warms the colors on your screen in the evening hours.

Even though SSMS has been built on the Visual Studio shell since 2012 and supports themes, the theme only applies to the query window and not the rest of the UI like it does in Visual Studio. The sidebar windows, results pane, and the rest of the UI don't honor the theme colors, giving an inconsistent experience. When using a dark theme, I minimize the white contrast of the rest of the UI by auto-hiding side panes and using Ctrl-R to toggle the results pane.

Hack Font

Your font also plays an important role in screen readability. It is recommended to stick with a fixed-width font for coding, and there are only a handful to choose from, like the default Consolas font. However, I have switched to the Hack font, which has been specifically designed for coding environments. It is easy to install and set as the default font for your text editor in SSMS.


Save Your Settings!

After investing time getting things just the way you like them, export all your settings and save them in your favorite cloud storage, so you can quickly move from one environment to the next and have all your familiar settings back in seconds. When importing you can choose only colors and fonts if you wish to leave the rest of your environment alone. I also export my registered server settings and since I use RedGate SQL Prompt and Multiscript, I export those settings as well and store them together with the Hack font files. This saves me a ton of time when switching to a new computer.

Feel free to leave comments on your favorite settings and color preferences.

 

UPDATE: SSMS Version 18

The hidden dark theme still remains hidden with this release and Microsoft has also moved things around.
SSMS Version 18’s new path to the ssms.pkgundef file:
C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE

Now you only have to comment out a single line at the bottom of the file (it must be edited as administrator):

// Remove Dark theme
// [$RootKey$\Themes\{1ded0138-47ce-435e-84ef-9ec1f439b749}]


Mystery of the Changing CASE WHEN Output and Data Type Precedence

I was working on a T-SQL query that had some very particular output requirements for decimal places. The problem seemed fairly easy, but when I tried to write a CASE WHEN statement that produced slightly different decimal lengths for each condition, the results were inconsistent. The reason wasn't immediately apparent, but it involves SQL data type precedence. I thought this would make a good blog post because anyone can unwittingly stumble into it without realizing what the SQL query processor is really doing. It is also hard to find the solution to a problem when you don't know what you are looking for.

REQUIREMENTS
If number < 100 always display 3 decimals
If number >=100 and < 250 always display 2 decimals
If number >= 250 always display 1 decimal

Decimals are truncated (not rounded) at the desired decimal place and must display the desired number of places even if the trailing digits are zero. I know this is something that should be handled by the presentation layer and not a SQL query, but that is a whole other blog post we won't get into here.

EXAMPLE:
317.0000 should display as 317.0
155.1023 should display as 155.10
90.1203 should display as 90.120

This seemed easy enough and was accomplished using a CTE to perform some initial math, followed by a CASE WHEN statement that formats the numbers with a different decimal cast (DECIMAL(12,3), DECIMAL(12,2), or DECIMAL(12,1)) to return the appropriate number of decimal positions. However, the CASE statement would always return 3 decimals for every output, even though the WHEN logic was branching correctly.

ALTERNATIVE?
The data warehouse person I was working with on this requirement provided a solution that wrapped each THEN expression in extremely long SUBSTRING and CHARINDEX functions to dissect both sides of the number and reconstruct it with strings, slapping a decimal point in there. It did work, but needless to say it was very ugly, and since the final code would return 5 million rows, I knew there had to be a better solution.

TEST CASE
I prepared a unit test to discover what was going on. What I found was that each decimal CAST works fine when run by itself outside the CASE WHEN statement, but the same CAST doesn't seem to work within the CASE WHEN statement.

[code language="sql" light="true"]
declare @credits decimal(18,5);
set @credits = 317.004;

select
    cast(floor(@credits * 1000) / 1000 as decimal(18,3)) as '3',
    cast(floor(@credits * 100)  / 100  as decimal(18,2)) as '2',
    cast(floor(@credits * 10)   / 10   as decimal(18,1)) as '1',
    case
        when @credits < 100                     then cast(floor(@credits * 1000) / 1000 as decimal(18,3))
        when @credits >= 100 and @credits < 250 then cast(floor(@credits * 100)  / 100  as decimal(18,2))
        when @credits >= 250                    then cast(floor(@credits * 10)   / 10   as decimal(18,1))
    end as 'CASE',
    case
        when @credits < 100                     then '<100'
        when @credits >= 100 and @credits < 250 then '>=100 and < 250'
        when @credits >= 250                    then '>=250'
    end as condition;
[/code]

Output
3        2       1      CASE     condition
-------  ------  -----  -------  ---------
317.004  317.00  317.0  317.000  >=250

MYSTERY REVEALED
The CASE column above should match the 1-decimal column output, but instead it has 3 decimals. Why did this happen? Why do the CAST statements alone produce the correct output, but produce the wrong output when run inside the CASE WHEN statement? I started to think about how the query processor would look at this query. When it evaluates the CASE statement it says: I need to allocate some storage for the output of this CASE statement, but I see three different decimal output types, so I will pick the one large enough to satisfy all possible outcomes.

Even though I wasn't doing anything drastic like having the CASE statement return either a string or an int, I believe I was still running into SQL Data Type Precedence, which states: "When an operator combines two expressions of different data types, the rules for data type precedence specify that the data type with the lower precedence is converted to the data type with the higher precedence. If the conversion is not a supported implicit conversion, an error is returned. When both operand expressions have the same data type, the result of the operation has that data type." Here are some more examples of this in action.
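
A minimal illustration of the behavior, independent of the original query: even though only one branch is ever taken, the CASE result is typed to cover every branch, so the widest decimal scale wins.

[code language="sql" light="true"]
-- the first branch is the one that runs, but the result type is derived
-- from all branches, so the value comes back as 317.000 instead of 317.0
select case
           when 1 = 1 then cast(317.0 as decimal(18,1))
           else cast(0 as decimal(18,3))
       end as widened_result;
[/code]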

SOLUTION
So, to solve this problem, I simply made every branch of the CASE statement return VARCHAR(20), which allowed the decimal CAST statements to produce their expected results. In the example below, the math was already performed on the credits in the CTE, and this query handles the formatting requirements. Needless to say, this solution was a lot better than performing cumbersome string manipulations to format the numbers.

[code language="sql" light="true"]
case
    when credits < 100                    then cast(cast(credits as DECIMAL(12,3)) as VARCHAR(20))
    when credits >= 100 and credits < 250 then cast(cast(credits as DECIMAL(12,2)) as VARCHAR(20))
    when credits >= 250                   then cast(cast(credits as DECIMAL(12,1)) as VARCHAR(20))
end
[/code]


Several Habits of a Highly Effective Troubleshooter

As a follow-up to my article Anatomy of a Successful Troubleshooter, I began thinking about the habits I have found effective when troubleshooting. While trying to coach a teammate on troubleshooting, I put together this catalog of practices I have found valuable enough to qualify as effective troubleshooting habits.

Problem Management and Root Cause Analysis

Problem Management and Root Cause Analysis are vast subjects, but I think they can be boiled down to this: perform troubleshooting more effectively and create measurable results.

Here are some key takeaways from this topic:

  • Reactive Problem Management is focused on finding an immediate work-around
  • Proactive Problem Management is focused more on finding the root cause and prevention
  • If we can properly identify a problem and understand it then we can probably do something to prevent it from occurring again
  • Often the first problem identified is not the root cause and we must continue to explore and dig deeper (this is the fun part)

There is a lot written about these topics from a management perspective, which talks about various barriers and challenges. However, we don’t need management committees to tell us how to get results; we just need to continue to improve our own troubleshooting skills and use good judgement.

When particular problems arise, we know performing some Root Cause Analysis can produce valuable information. This is where we discover how it works, how to fix it, why it is the correct fix and most importantly prevent it from reoccurring. For this investment, the worst case is that we learn something new; best case we are able to produce measurable improvements.

Effective Troubleshooting Habits

Filter and Corroborate Advice you Find

During a Google search, you may be led down many wrong paths, and proposed solutions may be either wrong or inapplicable to your situation. Sometimes I find an error that has over a dozen possible causes and a dozen proposed solutions. Read carefully and do not blindly implement them one by one until one works (see avoid brute force methods). You'll be more efficient in resolving the problem (and also not risk introducing additional problems) when you take the time to filter and critique what you're reading. Before implementing a solution, keep reading until you understand enough about the problem and the various solutions to make an educated choice of action. Try to corroborate advice from more reputable sources.

We have to consider that the solution to the problem may be the result of connecting several different clues that can only be found when examined together, not individually. In other words, if we look at one symptom and find a solution to that symptom, we may not necessarily have found the correct solution to our specific problem. All aspects must be correlated and examined together to discover the eventual correct solution.

The internet is also notorious for giving bad advice and selling it as good advice or even a best practice. There are many reasons for this. The poster's situation may be specific to their environment or involve undisclosed factors that don't apply to yours. Sometimes the solution applied to older OS or SQL versions and is either incorrect or no longer applies today. Try to find sources relevant to your versions or configuration. Attempt to verify that the symptoms in their solution are the same symptoms you are observing and are coming from the same source problem. Look for posts where there is some common agreement and eliminate the fringe cases that probably don't apply to the situation.

  • Be careful of being led down wrong paths that won't work
  • When there are too many possible causes, keep researching to eliminate the ones that don't apply
  • Corroborate advice from more reputable sources
  • Don't risk introducing new problems by implementing solutions you don't understand
  • Attempt to verify the proposed solution matches the symptoms associated with your problem
  • Filter and eliminate fringe cases that don’t apply

Observe the Obvious

Sometimes the clues are right in front of your face and it just takes stepping back to see it.

EXAMPLE: One time I was troubleshooting an FTP problem with a vendor. I logged on to their FTP server and confirmed our process had uploaded the files, but they were having issues with their back-end processes that picked up the files, and they claimed we were not sending them. I noticed that in the same directory where we uploaded the files, they had large files labeled ftp_activity_log. I opened one and found their back-end software was logging API communication failures in its own logs, which they had never bothered to inspect.

Correlate the Clues

Sometimes it takes someone asking, "Has anything else changed today?" It helps to discover what else has been going on recently with your network and servers and with other teams. But don't use this just to point blame at another team without actually finding a clue and taking the time to investigate further. Armed with preliminary information, you can begin to ask intelligent questions of other teams in a cooperative, resolution-focused manner.

EXAMPLE: One day an application that involved 4 SQL clusters started failing. The application lead spent half a day on the phone with the vendor troubleshooting to no avail. Later in the day I was inquiring with the busy application lead. I didn't get much information, but with the little I got, I started inspecting things myself and found in the application error logs that the problems all started during the lunch hour. I remembered getting a call during lunch from our SAN engineers, who were moving drives on a different cluster to the new SAN and had to recreate the MS DTC resources on the cluster. I went to the cluster and found the DTC service was not properly configured for remote access. When I fixed that, everything started working with the original application again. The application lead and vendor were unable to make this connection because they didn't look outside their scope.

Avoid Brute Force Methods without Prior Inspection

In an effort to just make problems go away and move on, don't fire off service restarts or server reboots without even looking at the problem. It is important to fix problems quickly, but collecting key data while the problem is occurring and testing your theories is also important for determining the root cause (especially for recurring problems). When you fail to collect key data beforehand, you can sometimes erase evidence or be unable to examine the problem afterwards. Sometimes mysterious problems are solved with a simple service restart or reboot, but if the problem is recurring, this is a prime opportunity to learn something new and rise above the average troubleshooter.

  • Don’t just fire off service restarts or server reboots until you have inspected the problem
  • Collect data beforehand to make it easier to duplicate or troubleshoot later

Get a Fresh Perspective

Sometimes you need to stop thinking about something to see it more clearly. When you spend too much time on a problem, your efforts become less effective and have diminishing returns. Sometimes you are troubleshooting a particularly evasive problem over many hours. Get up, take a break, and do something different. Verbalizing the problem or drawing it on a whiteboard helps, and solutions become more obvious when explaining them to someone else – then BAM – something you weren't seeing before comes into the picture and you're back on track. Don't be embarrassed to be wrong with your assumptions; we all go through a similar process and have similar blocks. It helps to have colleagues who can help each other when you get stuck.

A similar topic is brainstorming. When developing a new solution (or solving a problem) after spending many hours, I frequently find solutions come to me after taking a break and when I revisit them with a fresh perspective (an ah-ha moment that leads me in a new direction).

Document and Improve

After the troubleshooting is over and the fires are put out, take time to review the situation. The best time to document something is right after you developed it or fixed it, when all the details are still fresh in your mind.

Documentation

After troubleshooting something, it will usually come up again in some fashion. I can't remember every detail of what I troubleshot months ago – heck, sometimes a week ago. And remember, someone else might benefit from the knowledge you gained, so you can be a hero by sharing it. Create a Visio diagram, write a procedure document, write a wiki or blog article, etc. Share it with your peers and incorporate their input.

This extra step requires investing a little more time, but it also reinforces what you learned. When you start to document what you learned you gain a more complete understanding of the subject.

Identify Things that Could be Improved

During your troubleshooting you may discover systems or processes that need to be upgraded or improved. I want to make things more efficient, which in turn makes my job and others' jobs easier. This may mean putting in an automation task or suggesting a system or process improvement to management.

 

What habits have you found useful?


Anatomy of a Successful Troubleshooter

I recently found myself coaching a teammate with troubleshooting advice, but it seemed like no matter what I offered, they were unable to improve their situation or correct bad habits. It seemed like it just wasn't in their DNA. So, I started to wonder if they had the right stuff to become successful in this career. This made me curious whether there are core traits or qualities that make someone a good or poor candidate for an IT career.

I have worked with many different IT organizations, and over my career I have seen IT staff start out and move up quickly into better careers in networking, programming, or management in a short number of years. And I have seen IT staff become stagnant and seem to never advance.

I view troubleshooting like being a detective solving a whodunit mystery. It is a process of uncovering digital clues and attempting to connect the dots and find links to a plausible root cause. Tracking down and following up on these leads helps us arrive at an eventual resolution – cracking the case. I believe this is both an art and science.

What is the Right Stuff?

First of all, you have to be able to think logically and analytically. If someone doesn’t have this basic foundation to approaching and solving problems, it can become very difficult.

Over the years, I have observed two common qualities of successful IT professionals that are easy to spot: curiosity and passion (or at least strong interest).

If you don't have passion for technology and what you are doing, it is much harder to stay interested or advance and earn more money. Have you ever noticed that non-IT people who struggle with computers lack patience and therefore have low curiosity? They get frustrated easily and demand that it just work (pounding a fist on the keyboard).

Problem solvers are curious people and usually enjoy a challenging task. They are often found tinkering or experimenting to gain a better understanding. They want to know how it works and why it works. The secret to being successful in IT is being able to figure things out – which seems directly driven by the qualities of curiosity and passion.

Being Able to Figure Things Out

A study was performed about 30 years ago with beginner and expert Unix admins. It was performed again more recently and interestingly produced the exact same results.

A written test was given to both beginner admins and expert admins, and the scores showed very little difference between the two groups. This led to the (humorous) question: why do we pay so much more for expert admins when, objectively, they didn't test any better than beginner admins?

For the second test, they put the two groups in front of a computer and asked them to solve problems given to them. The results showed a dramatic difference. The experts were able to accomplish amazing things that beginners struggled with. This led to the conclusion that experts were clearly better at figuring things out.

I heard a recent real-life story where a company fired all their top-paid programmers and retained only the junior programmers. Someone thought this was a brilliant financial decision to cut costs. It's no wonder this company soon went out of business.

In my opinion, this classic experiment demonstrates that memorizing things (or passing a test) isn’t a good measure of skill. It has more to do with knowing where to look and skillfully connecting the dots and using good judgement in your execution. Skill only comes with wisdom and experience. But, once you develop these skills in one area, they do carry over and build upon new technologies you learn.

At every single job I have been hired for, I have been asked to learn and master new things that weren't part of the original job description and were never screened for during the interview. It's just the nature of the job in IT: we are continually asked to learn new technologies and troubleshoot things we have never done before. And, it's no surprise, when we do we acquire new skills in the process. I think this key quality – the ability to figure things out – should be better identified during the interview process and given more weight. Instead, hiring managers create check-off boxes of specific technologies or skills you already have against the job description (I could fill another whole article on the topic of better IT hiring practices).

Can I Make Things Better?

Another quality successful troubleshooters have is a desire to improve the systems with which they work. When you are troubleshooting are you looking for a quick fix to make the problem go away or do you look for an opportunity to learn?

Sure, sometimes we have to fix the problems quickly and don’t have time to look back. However, have you ever implemented a fix and didn’t really understand why it fixed it or understand if it was even the appropriate fix or if it may create new unforeseen problems?

It requires extra investment on our part to research an issue more deeply, but the extra time spent will pay dividends in the long run. This is where we discover more about how systems work, why a fix is the correct one, and how to prevent the problem from recurring. And sometimes we discover a better way to do things or make our systems just work better. This is where we build our toolkit of best practices.

Sometimes an improvement can become a new project which needs to be presented to management. This creates more work – but that’s a good thing, right? If you don’t want to improve systems for your company and enhance your skills, then you are probably not reading this article either.

Benefits of making things better?

  • Create measurable improvement or efficiency of a process or routine
  • Learn something you didn’t know before (blog and share what you learn)
  • Expand your knowledgebase and skillset making you more valuable
  • Make recommendations and implement changes that save time and money
  • Become a more valuable asset to your company

Coincidentally, these are also long-term benefits that help advance your own career.

 

What qualities do you think make IT professional more successful?
What qualities have been helpful in your career?


Zero-Cost Highly Available Reporting Services within a Failover Cluster

This solution is for anyone who has a SQL failover cluster and wishes Reporting Services to be highly available with failover capability. The solution I implemented seems the most natural approach and how Reporting Services should work right out of the box – but doesn't. It makes Reporting Services highly available (HA) on both cluster nodes with failover capability at zero cost (no additional servers or licenses). Although it is an uncomplicated procedure, much was learned from the experience, along with some interesting load distribution options.

I have implemented this solution on both:
Windows 2008R2 Failover Cluster with SQL Server 2008R2
Windows 2012R2 Failover Cluster with SQL Server 2012

These were active/passive clusters, but there is no reason why it wouldn't also work with other configurations.

NOTE: This solution provides high availability but low scalability. As soon as your Reporting Services demands exceed the power of your cluster server/node, you should consider a traditional scale-out deployment with multiple servers using Network Load Balancing (NLB).

Problem

As you know, if you install Reporting Services on a cluster, whenever the cluster fails over to the second node, Reporting Services will be down. When SSRS is down, the reporting URL is not accessible, and any missed scheduled report deliveries may have to be reprocessed manually after everything is failed back to the primary node. This probably doesn't happen often, but when it does it is a pain, and it's frustrating that Reporting Services doesn't work within a cluster as we expect. This happens because, as Microsoft states, SSRS is not a cluster-aware component of SQL Server. When you set up a cluster with SSRS and then install the second node, SSRS will not be installed, and if you attempt to add SSRS manually on the second node, the installer will not allow it.

I was migrating an older SQL 2005 cluster to a brand new SQL 2012 cluster and took the opportunity to research and test a more highly available solution. I found all the usual posts stating SSRS is not cluster-aware and you can't make it HA without deploying multiple servers in a scale-out deployment. But I didn't want to deploy multiple report servers and greatly increase our costs. I just wanted Reporting Services to work in my cluster the way we expect it should.

Options

Selecting an optimum Reporting Services solution involves weighing costs (for SQL licenses and servers) along with performance demands and scalability. I found various solutions available to make Reporting Services HA, each with its own pros and cons. Depending upon your needs, one of these other solutions may be a better fit for your environment.

HIGH-COST WITH HIGH SCALABILITY
Since SQL 2008, Microsoft has recommended a scale-out deployment – installing Reporting Services instances on separate servers, creating a web farm that shares a common Reporting Services database (on the cluster), and adding network load balancing. This common Microsoft recommendation allows great scalability but requires a SQL Enterprise edition license for each reporting server you add, which can get expensive.

MEDIUM-COST WITH MEDIUM SCALABILITY
Some people recommend installing a new instance of SQL just for Reporting Services on the second cluster node and joining it to the primary node in a scale-out deployment, which also requires an additional SQL Enterprise edition license. Using any combination of SQL Standard edition servers will not allow you to share a common reporting database and will present an error that scale-out deployment is not supported (this is why we pay for the expensive version).

I also discovered some creative, less-travelled solutions that provide lower costs. One solution divides the workload between two additional SQL Standard edition reporting servers by running only scheduled report distribution on one server and interactive report generation on the other (done by setting options in RSReportServer.config on each server). With this solution you can dedicate the resources of two individual servers sized for the workload of your Reporting Services tasks. Another solution uses two or more stand-alone SQL Standard reporting servers with merge replication and a few tricks to keep the reporting databases in sync, along with NLB. Neither of these solutions scales well beyond two servers.

ZERO-COST WITH LOW SCALABILITY
This solution is the focus of this article as it most closely resembles how SSRS should work out-of-the-box on a cluster. This scenario involves installing SSRS on both nodes of the existing cluster and creates HA Reporting Services that runs automatically on whichever node becomes active. However, by default the SQL installer won’t let you install SSRS on the second node unless you follow the instructions below.

Automatic failover works for the reporting URL because, when the cluster fails over, the active node automatically takes over the virtual database name, which is the name the reporting URL always points to. For example, we have an active/passive cluster with computers named CLUSTER01 and CLUSTER02 and the virtual SQL database name SQLCLUSTER. When CLUSTER01 is the active node and you access SSRS via the URL http://SQLCLUSTER/Reports, the request is handled by the active node. During a failover, the SQLCLUSTER virtual database name moves to the new active node, and SSRS on that node now services all the reporting URL requests.

Automatic failover for the report scheduling engine works because we have joined the two nodes in a scale-out deployment sharing a common reporting database. We can choose to let Windows Failover Cluster manage the SSRS service failover or run the SSRS engine on both nodes. This option creates some interesting load distribution scenarios which can be found below under the section LOAD DISTRIBUTION OPTIONS.

Instructions

It is assumed you are starting either from an installed SQL cluster that never had SSRS installed, or from an installed SQL cluster with SSRS on one node to which you wish to add SSRS on the second node. To make Reporting Services HA, you need to install Reporting Services on both nodes and configure them to use the same reporting database in a scale-out deployment.

ADD SSRS TO INSTANCE
We first need to add SSRS to the existing instance. However, when you attempt to run the wizard to install SSRS, you run into a fatal cluster validation failure that stops you. According to Microsoft, adding or removing features on a SQL failover cluster is not supported. If you forget to install a feature, Microsoft recommends you uninstall the SQL cluster instance and start over, or install the feature into a new instance, but no explanation is provided. Neither of these options is what we want.

Wizard Rule Check Error

  • Rule “Existing clustered or cluster-prepared instance” failed. The instance specified for installation is already installed on clustered computer. To continue, select a different instance to cluster.
  • StandaloneInstall_HasClusteredOrPreparedInstanceCheck – Checks if the selected instance name is already used by an existing cluster-prepared or clustered instance on any cluster node.

 


HOW TO INSTALL SSRS ON EXISTING CLUSTER
In order to install SSRS onto an existing cluster, we need to run SQL setup from the command line using a switch to bypass the cluster rule check that the wizard was complaining about.

Setup.exe /SkipRules=StandaloneInstall_HasClusteredOrPreparedInstanceCheck /Action=Install

  1. Run SQL setup from the command line (as shown)
  2. The wizard will appear and now you can proceed to perform the install adding the SSRS feature
  3. Repeat this for each node that doesn’t already have SSRS installed

NOTE: This procedure has been performed by others for many years to add components to an installed cluster and I found no reported problems.

CONFIGURE REPORTING SERVICES
Now we need to configure SSRS on both nodes and join them in a scale-out deployment. For the most part, the standard directions for configuring an SSRS scale-out deployment are valid. However, note that this reference assumes you are installing SSRS on separate stand-alone, non-clustered servers.

  1. If SSRS is not yet configured on the FIRST node, open Reporting Services Configuration Manager and configure all the settings
  2. Backup encryption key after FIRST node is working correctly
  3. On the SECOND node, now open Reporting Services Configuration Manager and select the local computer name for that node
  4. Select Database and choose an existing report server database and select the same ReportServer database configured on the first node (your shared cluster storage)
  5. Configure the Report Server web service URL
  6. Configure the Report Manager URL
  7. Restore encryption key on SECOND node from backup on FIRST node (important)
  8. Re-open Reporting Services Configuration Manager, connect to the FIRST reporting server node, and select Scale-out Deployment, where you should see the second node with a status of Waiting to Join (you must connect to the first report server to join additional report servers)
  9. Select this server and click Add Server

SSRS Configuration Manager Scale-out deployment


CONFIGURE GENERIC SERVICE
This optional step allows the cluster manager to ensure that SSRS is only running on the currently active node. If you wish to run SSRS concurrently on both nodes see section on load distribution options and important notes on SQL licensing.

Within Failover Cluster Manager, create a Generic Service: first right-click Roles and select Create Empty Role, then:

  1. Right-click the new role and select Add Resource > Generic Service, then scroll down and select SQL Server Reporting Services
  2. Right-click the newly created service and select More Actions > Assign to Another Role to move this resource to your existing SQL Role.
  3. Select your existing SQL Role and right-click the newly moved SSRS resource and select properties, go to the dependencies tab and add SQL Server (to ensure this service isn’t started until after the database is available)

In Failover Cluster Manager, the SSRS generic service resource should now appear under your existing SQL Role alongside the other cluster resources.

VERIFY

  1. Open the report services URL using the SQL virtual database name in your client browser and verify it works
  2. Force a failover of SQL to the other node
  3. Refresh your browser to test that the reporting URL still works using the same URL and verify that the second node is now servicing all requests
  4. Confirm scheduled report delivery works on both nodes
  5. If you configured the cluster to manage SSRS as a Generic Service, verify the service is running with manual startup and was successfully started on the active node and stopped on the other node

LOAD DISTRIBUTION OPTIONS

  • If you skip the configuring generic service step, you can optionally allow SSRS to run automatically at startup on both nodes. With this scenario all SSRS URL requests will be directed only to the currently active node but all scheduled report processing will be processed by both nodes.
  • When both nodes are running SSRS, you may think that only the active node will service scheduled report requests because it is the only node running SQL Agent to process the scheduled jobs. However, this is not true. The way SSRS works, when the SQL Agent job schedule kicks off, it creates a record in the Event table and lets Reporting Services pick up and process these requests. In a scale-out deployment, any SSRS server can process the requests, offering a default form of load distribution.
  • Adding an NLB would allow both interactive reporting and scheduled report processing to be distributed.
  • Running any SQL services concurrently on both nodes may require a license for the passive node (see important notes).
  • If running SSRS concurrently on both nodes causes you any difficulty then you can configure the cluster manager to failover SSRS, ensuring it is only running on the currently active node (see Configuring Generic Service).

IMPORTANT NOTES

Interactive Reporting:

  • When both nodes are running SSRS, only the active node will service the URL request WHEN users are accessing URL via the SQL virtual name (recommended).
  • If users access the URL using the node/computer name the SSRS running on that node will service the request (as long as the service is running on that node). This can be a form of ad hoc load balancing, however having users pointing directly to one node will make the URL inaccessible if that node goes down (not recommended).

Service Accounts:

  • The database credentials on both nodes must use a SQL login or domain account so Reporting Services can always access the database from the remote node.
  • The Reporting Services service can run with either a domain account or the built-in Network Service account; however, if you run into Kerberos authentication issues at the URL, see this document.

Troubleshooting:

  • If the two scale-out nodes are not identically configured (database or service accounts, email config, etc.), this might result in intermittent failures with report deliveries, and the true cause may not be obvious. To troubleshoot subscription problems, you can verify which node is processing requests by querying the ExecutionLog and noting the InstanceName that processed the request. You can also examine the SSRS trace log files for authentication or other errors.
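
A quick sketch of such a check, assuming the default ReportServer database name (adjust if your report server database is named differently):

[code language="sql" light="true"]
-- which node handled recent report executions and subscription deliveries
select top (50)
       InstanceName,   -- e.g. CLUSTER01 or CLUSTER02
       ItemPath,
       RequestType,    -- 'Subscription' rows are the scheduled deliveries
       TimeStart,
       Status
from ReportServer.dbo.ExecutionLog3
order by TimeStart desc;
[/code]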

SQL Licensing

  • Licensing is an ever-changing subject, so you should consult your license provider about your particular situation
  • Normally, with an active/passive cluster you only need a license for the active server; however, running any SQL services on both nodes may require a license for the passive node
  • The SQL Server 2014 licensing changes state that cluster passive nodes must be covered by Software Assurance (SA)

HELPFUL RESOURCES

Scale Out SQL Server 2008 R2 Reporting Services Farm using NLB
https://www.mssqltips.com/sqlservertip/2335/scale-out-sql-server-2008-r2-reporting-services-farm-using-nlb-part-1/

How to: Configure a Report Server on a Network Load Balancing Cluster
https://msdn.microsoft.com/en-us/library/cc281307(v=sql.105).aspx

How to setup SSRS high availability with Standard Edition
http://pietervanhove.azurewebsites.net/?p=513

Configure a Native Mode Report Server Scale-Out Deployment
https://msdn.microsoft.com/en-us/library/ms159114(v=sql.110).aspx

 


Accessing Multiple Cluster Instances without Instance Names

Since I was installing a dual instance cluster, I had set up two virtual SQL database names with different IP addresses, and the second instance required a non-default instance name. However, I wanted to access both instances using only the virtual name, and I found I could do that by setting both instances to listen on port 1433. Now SSMS and other applications can access the second instance using only the SQL server name.

For example,

SQLCLUST2012 – default instance name
SQLCLUST2008\SQL2008 – named instance

can now be accessed simply by their SQL virtual database names:

SQLCLUST2012
SQLCLUST2008
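
To confirm the port configuration is working, here is a minimal sketch, assuming the example names above: connect in SSMS using just SQLCLUST2008 (no instance name) and run:

[code language="sql" light="true"]
-- Confirms which instance answered and on which TCP port
SELECT @@SERVERNAME AS connected_instance;   -- expect SQLCLUST2008\SQL2008

SELECT local_tcp_port                        -- expect 1433 if the static port is set
FROM   sys.dm_exec_connections
WHERE  session_id = @@SPID;
[/code]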

Note: some problems have been noted with older tools accessing the instance in this manner.

More details can be found in Ryan McCauley’s blog post, Accessing a clustered SQL Server instance without the instance name.

Another method for non-clustered servers with multiple instances is to establish an alias for each instance.

 


SQL Setup Checklist – Because SQL Defaults Suck!

When installing SQL, it is too easy to click Next and accept all the default settings. Kevin Boles has preached for years that SQL Server defaults suck! Joseph D’Antoni has also addressed bad default settings in Building Perfect SQL Servers Every Time with his best practices installation script. As a SQL DBA, it is best to configure our servers on purpose and not by default. A few things can be changed during the installation, but the bulk of the setup steps occur after the install is completed.

Based on this good advice and best practices, I have continued to refine a setup script and master checklist that I apply to every new SQL Server instance. Any time I do something over and over a few times, I script it and document it. Running this script is the first thing I do after a SQL install and before any production data touches the instance. It isn’t dynamic enough to run automatically on every type and version of server, but it is a pretty quick process that helps keep things organized and consistent across all servers. In a nutshell, the script modifies default settings, optimizes the environment, and creates SQL jobs and regular maintenance schedules with email alerting.

Benefits:

  • Every server has a familiar configuration baseline
  • Initial security is consistent at deployment
  • Servers are configured more predictably, for easier troubleshooting
  • Standardized monitoring, alerting and reporting systems

This is briefly what my setup checklist looks like. Most of these topics and settings are covered in the video links above, and a short T-SQL sketch of a few of these settings follows the checklist.

Setup Checklist

  • During SQL Install
    • Only install necessary features
    • Assign domain account to sysadmin role
    • Set SQL services to run under proper domain accounts
    • Assign proper database, log and tempdb drive locations
  • Modify default settings
    • Set the model database to simple recovery mode and adjust auto growth settings
    • Set min/max memory
    • Set MAXDOP and Cost Threshold for Parallelism
    • Set proper # of tempdb files
    • Create startup sproc for trace flags
    • Enable remote admin connection
    • Enable backup compression
  • Modify environment (all pre-scripted)
    • Create service accounts and assign roles
    • Disable SA account
    • Set up Database Mail
    • Set up an operator and email alerts for severity 16-25 errors and error numbers 823-825
  • Install third party scripts
  • Setup SQL Jobs (pre-scripted with schedules and email alerting)
    • Jobs for full user database backups
    • Job for full system database backups
    • Job for index maintenance
    • Job for integrity checks
    • Job for system log cleanup
  • Extra
    • Install latest SP
    • Enable Instant File Initialization
    • Add server to custom monitoring solution
    • Ensure server power management is set to High Performance
    • Ensure anti-virus software is SQL aware
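
Here is a minimal T-SQL sketch of a few of the default-setting changes from the list above; the values are placeholders and should be tuned for each server:

[code language="sql" light="true"]
-- Example values only; adjust for the server's memory, CPU and workload
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 28672;        -- leave headroom for the OS
EXEC sp_configure 'cost threshold for parallelism', 50;
EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'remote admin connections', 1;          -- allow the DAC remotely
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

-- Set the model database to simple recovery and saner auto growth
ALTER DATABASE model SET RECOVERY SIMPLE;
ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 256MB, FILEGROWTH = 256MB);
ALTER DATABASE model MODIFY FILE (NAME = modellog, SIZE = 128MB, FILEGROWTH = 128MB);
[/code]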

Gotchas: Installing SQL 2008R2 on Windows Server 2012R2 Cluster

I was building a new cluster with dual instances of SQL 2012 and SQL 2008R2. Everything went smoothly with the SQL 2012 install, but not so well with SQL 2008R2. As many have found out, installing SQL 2008R2 on Windows Server 2012R2 is met with some strange errors because the SQL installer was written long before that server OS. Even though SQL 2008R2 mainstream support ended 7/2014, extended support is still available through 7/2019 and it is still in wide use.

Depending upon your environment you may run into all of these problems.

  • “Setup Requires Microsoft .Net Framework 3.5 SP1” failure; .Net 3.5 needs to be installed before running setup.
  • “Cluster Service Verification” failure because SQL setup is unable to detect the Windows cluster; fixed by installing the legacy component “Failover Cluster Automation Server”.
  • “Windows Server 2003 FileStream HotFix Check” failure when installing from the SQL RTM media; fixed by creating a slipstream install with SP3.

Setup Requires Microsoft .Net Framework 3.5 SP1

On Windows Server 2012R2 the .Net Framework 3.5 (NetFx3) feature is not installed by default; however, various applications such as SQL 2008R2 and SQL 2012 require it.

NetFx3 can be installed from the Server Manager GUI, but I find the command line option quicker. You will need the original Windows Server source media available at the path specified. See Daniel Classon’s blog for more information.

[code language=”sql” light=”true”]
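REM Enable the .Net Framework 3.5 feature from the original Windows Server media; adjust the /Source path to where your media is mounted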
dism /online /enable-feature /featurename:NetFX3 /all /Source:d:\sources\sxs /LimitAccess
[/code]

Cluster Service Verification Failed

During the setup support rules check, setup will fail to detect the cluster because it uses a deprecated method to query the cluster status. To get SQL 2008R2 setup to work, you need to install the legacy cluster component “Failover Cluster Automation Server” so that setup can properly detect the existence of the cluster. This can be solved with a simple PowerShell command.

Error encountered upon install of SQL2008R2 on Server 2012R2

[Screenshot: Cluster Service Verification rule failure during the setup support rules check]

Display the WSFC components and install Failover Cluster Automation Server:

[code language=”sql” light=”true”]
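# Install the legacy Failover Cluster Automation Server component so SQL 2008R2 setup can detect the cluster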
Install-WindowsFeature -Name RSAT-Clustering-AutomationServer
[/code]

[Screenshot: failover clustering features after installing Failover Cluster Automation Server]

I found excellent information here that covers the whole topic.

Windows Server 2003 FileStream HotFix Check

If you started the install using the SQL 2008R2 RTM version, you will make it all the way to the last step, where you are ready to install the files, and then receive an error stating that you don’t have the required Windows 2003 FileStream HotFix installed (on your Windows 2012R2 server?).

Fixing this error requires at least SP1, but if you are installing from the original RTM distribution you likely planned on applying the service pack afterwards. Getting around this dilemma involves performing a SQL 2008R2 slipstream install. I modified the instructions I found to make this work with SP3. This SQL 2008R2 SP3 slipstream installer can then be reused for any future install, cluster or standalone.

[Screenshot: Windows Server 2003 FileStream HotFix Check rule failure]

CREATING SQL 2008R2 SP3 SLIPSTREAM INSTALLER

  1. Copy original SQL Server 2008 R2 source media to C:\SQL2008R2_SP3
  2. Download the SQL Server 2008 R2 SP3 for both x64 and x86
  3. Extract each of the SQL Server 2008 R2 packages to C:\SQL2008R2_SP3\SP
    1. SQLServer2008R2SP3-KB2979597-x64-ENU.exe /x:C:\SQL2008R2_SP3\SP
    2. SQLServer2008R2SP3-KB2979597-x86-ENU.exe /x:C:\SQL2008R2_SP3\SP
    3. Complete this step for both architectures so the original media is updated correctly
  4. Copy Setup.exe from the SP extracted location to the original source media location
  5. Copy all files (not the folders)
    1. from C:\SQL2008R2_SP3\SP\x86 to C:\SQL2008R2_SP3\x86 to update the original files
    2. from C:\SQL2008R2_SP3\SP\x64 to C:\SQL2008R2_SP3\x64 to update the original files
    3. except the Microsoft.SQL.Chainer.PackageData.dll (do not copy this file)
  6. Determine if you have a DefaultSetup.INI at the following locations:
    1. C:\SQL2008R2_SP3\x86
    2. C:\SQL2008R2_SP3\x64
  7. If you have a DefaultSetup.INI at the above locations, add the following line to each DefaultSetup.INI:
    1. PCUSOURCE=".\SP"
    2. If you do NOT have a DefaultSetup.INI, create one with the content shown after these steps and copy it to both C:\SQL2008R2_SP3\x86 and C:\SQL2008R2_SP3\x64
    3. This file tells the setup program where to locate the SP source media that you previously extracted
  8. Run setup.exe as you normally would
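
For reference, here is the full DefaultSetup.INI content from step 7 (PCUSOURCE simply points setup at the extracted SP folder):

[code language="sql" light="true"]
;SQLSERVER2008 R2 Configuration File
[SQLSERVER2008]
PCUSOURCE=".\SP"
[/code]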

Future Compatibility Woes

This is not the first time a SQL install has presented a compatibility warning when run on a newer version of Windows Server. The good news is Microsoft seems to have addressed this in SQL 2012 with integrated slipstream technology: the installer introduces Product Update functionality automatically into the setup process. However, we’ll have to wait for Windows Server 2018 or 2020 to see how well it works.
