Integrating Salesforce and Rails

At Info-Tech we store all of our customer data in Salesforce. This includes things like subscription information, which we use to determine what content on our website each client should have access to. As a result, we need access to a lot of our Salesforce data from our website, which is written in Ruby on Rails.

Current Integration Method

Our current method of near real-time integration involves tracking changes to the data the website requires in Salesforce, plus a Windows service which polls for changes, queries for a subset of the changed records and then sends them to the website via web service calls. It then goes back to Salesforce, marks any successfully integrated records as up-to-date, and leaves any failed records to be retried. This has worked relatively well for us over the past year, but it has some issues.

  • Environment
    • It requires us to maintain an extra Windows server for our production environment, plus an additional server to support the rest of the development/qa environments.
  • Language
    • It is one of the only applications we have that is written in .NET and there are only a few people in the department who have ever looked at the code.
  • Efficiency
    • Since it polls for records, it uses API calls regardless of whether or not there is any data available.
  • Reliability
    • It is somewhat prone to “clogging”. Since it only pulls a specified number of records at once, if enough records fail to integrate for any reason it will eventually reach a point where it is constantly retrying these failing records and no other data makes its way through. The result is that no data moves at all until someone unclogs the pipes.
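To make the clogging hazard concrete, here is a minimal Ruby sketch of the polling loop described above. All of the names are hypothetical stand-ins (the real service was written in .NET); the point it demonstrates is that a fixed-size batch of permanently failing records can starve everything queued behind it.

```ruby
# Each poll pulls only a fixed-size slice of pending records. Successes are
# marked up-to-date (removed from pending); failures stay at the front of the
# queue and are retried on the next poll, ahead of everything else.
BATCH_SIZE = 3

def poll_once(pending, website)
  batch = pending.first(BATCH_SIZE)                  # fixed-size slice per poll
  integrated = batch.select { |rec| website.push(rec) }
  integrated.each { |rec| pending.delete(rec) }      # mark successes up-to-date
  integrated                                         # failures remain queued
end
```

If the first `BATCH_SIZE` records always fail, `integrated` is empty every time and records 4 and 5 never get a turn, which is exactly the "unclog the pipes" scenario.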

Making It Better

Recently, we discovered some data issues on our website. What we realized was that when Contacts were merged in Salesforce this wasn’t being translated properly to the website. Instead of handling this data scenario within our existing integration system, we took this as an opportunity to come up with a new method for integration.

What we settled on was a method that used two features of Salesforce that we hadn’t used together before: Workflow and Outbound Messaging. Workflow is a configuration-based way of triggering some action on create, update or delete of a record, and Outbound Messaging allows you to make a SOAP-based web service call as part of a Workflow rule.

Outbound Messaging has several features which make this an appealing approach. It queues messages together and sends them in batches of up to 100. It also ensures message delivery: if it doesn’t receive a success response, messages are retried at increasing intervals for up to 24 hours. There is one downside, which is the fact that it won’t make a pure REST-based call; you have to accept a SOAP message. We settled on a simplistic approach of using a lightweight XML parser to rip out the bits of the SOAP message that we care about.
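As an illustration, ripping the interesting bits out of the SOAP envelope can be sketched with Ruby's bundled REXML. The message below is a simplified stand-in for a real Salesforce outbound message (real payloads carry more namespaces, session info and fields than shown here):

```ruby
require "rexml/document"

# A simplified stand-in for a Salesforce outbound message envelope.
SAMPLE_MESSAGE = <<XML
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <notifications>
      <Notification><sObject><Id>003A0000001AAA</Id></sObject></Notification>
      <Notification><sObject><Id>003A0000001BBB</Id></sObject></Notification>
    </notifications>
  </soapenv:Body>
</soapenv:Envelope>
XML

# Rip out just the bits we care about: the record Id in each notification.
def extract_record_ids(xml)
  doc = REXML::Document.new(xml)
  ids = []
  REXML::XPath.each(doc, "//Notification/sObject/Id") { |el| ids << el.text }
  ids
end
```

Ignoring the rest of the SOAP ceremony and pulling out only the record Ids keeps the endpoint trivially simple, at the cost of not being a "proper" SOAP client.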

The New Pattern

What we implemented for this Contact merge scenario: whenever a Contact merge is detected in Salesforce, an Outbound Message is triggered via Workflow containing the IDs of the two records being merged together. A Rails web service takes this message, pulls out all the pairs of Contact IDs, performs a merge of each pair, and returns either success or failure to Salesforce. This method was simple to implement, had no dependency on external services and was much easier to maintain than the .NET polling method.
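The merge step itself can be sketched in plain Ruby. `ContactStore` below is a hypothetical in-memory stand-in for our ActiveRecord Contact model, kept self-contained for illustration; each pair is (winning Id, losing Id), and the loser's data is folded into the winner before the loser is removed:

```ruby
# Hypothetical stand-in for the Contact model: a hash of id => attributes.
class ContactStore
  def initialize(contacts); @contacts = contacts; end
  attr_reader :contacts

  # Merge one (winner, loser) pair; returns false if either record is missing.
  def merge_pair!(winner_id, loser_id)
    loser  = @contacts.delete(loser_id) or return false
    winner = @contacts[winner_id]       or return false
    # Keep the winner's values, filling any blanks from the loser.
    @contacts[winner_id] = loser.merge(winner)
    true
  end
end
```

In the real service this would of course be ActiveRecord updates inside a transaction, with the success/failure result reported back to Salesforce.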

Making It Betterer

Once we got this working, we went over it and identified some ways we could make it a little more efficient and robust. We abstracted the processing of the SOAP messages so that we could process any SOAP outbound message from Salesforce and convert it to a hash. Now when our Rails web service receives a message, it parses the SOAP, pulls out the record IDs and places them in a queue for processing. Once all of the records have been queued, we return a success response to Salesforce so that the Outbound Message will be removed from its queue.

From there, a worker process that monitors the integration queue pulls the Salesforce IDs and queries the Salesforce API for the contents of those records. We update the data in the web database, remove the ID from the queue, and the process is complete.
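As a sketch, the worker loop might look like this, with hypothetical stand-ins (`queue`, `salesforce`, `db`) for whatever queue store, Salesforce API client and database layer the app actually uses:

```ruby
# Drain the integration queue: pull Salesforce Ids in batches, fetch the
# current record contents with one API query per batch, update the web
# database, then acknowledge the Ids so they leave the queue.
def process_integration_queue(queue, salesforce, db)
  while (ids = queue.pop(100)) && !ids.empty?
    records = salesforce.fetch_records(ids)   # one API query for the batch
    records.each { |rec| db.upsert(rec) }     # update the web database
    queue.ack(ids)                            # remove the Ids from the queue
  end
end
```

Because a record that fails to integrate simply stays on (or returns to) the queue as an individual item, one bad record never blocks the batch behind it, which is the key difference from the old polling design.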

This new process seems to solve all the issues that we had with our old method of integration:

  • Environment
    • No more external server, everything is either contained in Salesforce or in our Rails codebase.
  • Languages
    • No more .NET, everything is either Salesforce configuration, or Ruby.
  • Efficiency
    • Since Salesforce pushes notifications of data, we don’t have to use API calls to check for modifications.
  • Reliability
    • Even if there are records that can’t be integrated for any reason, they will never prevent other data from being replicated.

We now have a robust, relatively simple integration method that we can start to roll out for all of our Salesforce -> Rails integration needs.


Ruby on Rails and the Shift to TDD

If you had asked me a year ago what my thoughts on Test Driven Development (TDD) were, you would have heard something like “I just don’t get it.” I never understood how writing tests could improve your velocity or make you more productive.

At Info-Tech, we have transitioned to Agile development. One of the biggest aspects of Agile is Test Driven Development. Our CIO would always talk about TDD and how he wanted us to get there. He insisted that we should be writing tests before we start coding. I was not sold and didn’t see how we would ever achieve this goal. When we made the decision to move to Ruby on Rails over 2 years ago, we were learning Rails and Ruby as we went and tests were an afterthought. How were we supposed to do TDD on code that had no tests to begin with?

At Railsconf 2011, I noticed that all the “cool kids” were talking about TDD. I listened to how all the leaders in the Rails community were touting how wonderful Test Driven Development was. TDD was a big deal. Rails was even built from the ground up with testing in mind. Perhaps I wasn’t really the Rails guru I thought I was. If I wanted to come close to being as good as some of these guys, I would need to figure out this testing thing.

One video, I think, had the most profound effect on me: listening to Corey Haines talk about Test Driven Development.

He talks so elegantly about how you don’t need to dive right into writing your tests first. The first step is simply thinking about tests. This made me feel better and made things not seem as daunting … and what he said makes total sense to me. This one 10 minute video changed my views on testing.

Great, so now I am sold on testing. I get the value of tests. But how do we get there?

One of our biggest issues was not knowing how to test effectively (or how to test at all). RSpec allows you to test all sorts of things: models, views, requests, routes, helpers and controllers … that’s a lot of tests to write! However, are they all necessary? I don’t think so, and neither does Ryan Bates.

Enter Capybara

Capybara simplifies integration testing and actually simulates how a real user would interact with your web application. Here is an example test that makes sure a user can log in:

 describe "Authentication request" do
   it "logs a user in" do
     visit login_path
     fill_in "username", :with => "testuser"
     fill_in "password", :with => "secretpassword"
     click_on "Sign in"
     page.should have_content("You have been successfully signed in.")
   end
 end
Very simple and extremely powerful! When we write these request specs (or integration tests) we are actually testing the full stack of our app. No stubbing or mocking. It is exactly like having a human click through doing regression testing. We can even test JavaScript functionality! Write your integration tests around the experience a user should have; these are the most important tests to have. The unit tests (model specs) will flow from there.

The Light Goes On, Our Team “Gets It”

We took the first steps to pure TDD on a project I am currently working on. We had all the tools, we just had to force ourselves to use them. The biggest pain point was setting up our test data and factories … but the good thing is, you just need to do all that upfront work once. Then testing becomes easier.
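To give a flavour of that upfront factory work, here is a plain-Ruby sketch of what a factory buys you. Our project used an actual factory library; `define_factory` and `build` here are made-up stand-ins, not its API:

```ruby
# Hand-rolled stand-in for a test-data factory: declare an object's valid
# defaults once, then stamp out per-test variations with overrides.
FACTORIES = {}

def define_factory(name, defaults)
  FACTORIES[name] = defaults
end

def build(name, overrides = {})
  FACTORIES[name].merge(overrides)
end

# The upfront work: valid defaults for each model, written exactly once.
define_factory(:user, :username => "testuser",
                      :password => "secretpassword",
                      :admin    => false)
```

Once the defaults exist, every spec that needs a user writes one line (`build(:user, :admin => true)`) instead of repeating the whole setup, which is what makes testing feel cheap after the initial investment.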

There were 4 of us working on the project, pair-programming, all writing tests. At first we thought it was really cool when we had lots of green tests passing. But the coolest thing happened at the end of our first week of writing tests. The tests started breaking!!! They were actually catching bugs as we continued to develop! On more than one occasion these tests saved us from promoting bad code! The more tests we write, the more solid our code is going to be in the future. Knowing your code is wrapped in a bunch of tests ensures that your code will always do what it was intended to do!

I’ve drunk the Kool-Aid: I write tests first, and I am loving it! If anyone is reading this and has doubts, do me a favour and take the time to start thinking about tests. Take baby steps. The first time your tests actually catch bugs, BEFORE they hit production, you will be sold! Because when that happens, you have truly increased your velocity and you have earned the right to call yourself agile :).

Meta-data Bug

Internally we use Informatica to synchronize data between our internal SQL Servers and Salesforce. Informatica offers two flavors of data integration: Data Synchronization and Data Replication.

Data Synchronization offers a higher level of flexibility between sources and targets when integrating data, but also demands more maintenance. For example, when copying an object from Salesforce to a database table, if I remove a field from Salesforce which is included in this synchronization task, I need to update the synchronization task or it will start to fail. A similar thing happens when I add a field in Salesforce: I cannot add a new field and have it automatically added to the table in SQL Server; I need to open the synchronization task and make changes.

Data Replication addresses this issue. It will automatically copy the schema to your target table and incrementally replicate both data (Inserts, Updates and Deletes) and meta-data changes. For our purposes of bringing data into SQL Server, crunching numbers and pushing the result back to SalesForce, the Replication task is far less maintenance and a better fit.

It was during our implementation of this product that we unearthed a problem. It seemed like Informatica was unable to bring some of our accounts down to SQL Server because of a rounding issue. So, Ron opened a ticket with both Informatica and Salesforce. After much finger pointing and back-and-forth between the two, we had a phone call with everyone involved. The result of the call identified a bug in Salesforce.

The bug is a result of using a roll-up field on our account object to sum a currency field of a child object. The resulting field is returned via the Meta-Data API in Salesforce as having a precision of 4 decimals, but the data is returned with many more decimal places when retrieved through both APEX and the SOAP API. This means that Informatica is expecting four decimal places yet is being given many more, which results in Informatica failing to bring the data into SQL Server.
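The shape of the mismatch is easy to demonstrate in a few lines of Ruby; the numbers below are illustrative, not taken from the actual ticket:

```ruby
require "bigdecimal"

# The metadata declares a scale of 4 decimal places; a strict consumer
# (as Informatica is here) validates incoming values against that scale.
DECLARED_SCALE = 4

# True only if the value fits within the declared number of decimal places.
def fits_declared_scale?(value)
  d = BigDecimal(value)
  (d * 10**DECLARED_SCALE).frac.zero?
end
```

A rolled-up value like "1234.56789012" fails this check even though the API's own metadata promised it would never have more than four decimals, which is the contradiction the two vendors spent months pointing fingers over.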

The good news is that Data Synchronization seems to overcome this issue somehow. So, we use both a Synchronization and a Replication task for objects where this problem occurs. We create a Replication task which brings down all data for an object, excluding any currency fields. We then create a Synchronization task which brings down the remaining fields and performs an update on the SQL Server table. Unfortunately, this decreases the maintainability of the solution.

The bug was sent off to the R&D team almost six months ago. I recently asked my SalesForce Account Specialist for an update on this bug. The response left me bewildered at its absurdity: “There are currently no plans to fix the bug”. I interpret this as: they know about it, but it has not been assigned to anybody to work on. I can understand that work needs to be prioritized, but having it followed with “You can always post it on the Idea Exchange and try to have it voted up” is amazing, to say the least.

Let me see if I have this straight: you want me to take a BUG that I have brought to you, that you have acknowledged and sent off to your R&D team, and post it on the IdeaExchange? I did not realize BUGs were new feature requests. I should take a lesson from this and see if I can implement a similar approach in our department. Any client login issues should be treated as new feature requests and require executive sponsorship to work on. Oh wait, that is dumb, I better not try that.

Oh SalesForce, please tell me that you have not grown to the point where you are overcome with so much bureaucracy and red tape that you cannot identify how ridiculous it is for you to take so long to fix this bug.

How Security Relates to Your Brand

This is primarily intended as a technical blog, however, every now and then there are issues that arise which require us sun-deprived tech people to grudgingly think outside of the tech-box. We sometimes act like our tech world is less like a box and more like one of those underwater shark-proof cages — we use tech to keep the business people out. Well, time to head out there into the scary business waters where all those suits swim for a minute.

Check your tanks and let’s go… 🙂

Swim With The Fish! (Credit: TANAKA Juuyoh (田中十洋) - Wikimedia Commons)

Seeing Security From Within The Business Waters

One issue that I often see coming up over and over again is a misunderstanding of how a very small security issue can affect the business, specifically in the area of trust. I’m not talking about larger issues like a SQL injection vulnerability, or improperly configured servers, which are always understood by technical people as things that must be solved ASAP, but I’m talking about the really simple things such as URL forwards (sometimes called Open Redirects or Open Forwards.)

Let’s look at an example to see how that relates.

What I mean by URL forwards are situations where, in the web world, we would use a URL parameter to keep track of where we want to send a user next. Sometimes, for example, we may have a content page on what we will pretend is a site at our company.

Now let’s pretend this is a content site, and that this page says “this page is available for subscribers only, so please login!” We probably want to send the user to a common login page, but at the same time we also need to get the user back to where they were so that they can enjoy the article they paid for. How do we remember where we should send them back so the user can finish reading her content?

Well, a common technique is to save the current URL in a parameter, and send the user back to that page once they are authenticated.

What a Hacker Sees

The problem with URL parameters is that they are not read-only. I could just as easily type a modified URL into the browser myself, and if the parameter is not checked properly to ensure it cannot be abused to reach other sites, it might forward the user somewhere else besides our site, for example straight to Google immediately after login.

Well, there’s no apparent harm from this, for a couple of reasons, and that’s usually what is heard when discussing this sort of issue:

  1. Someone would (apparently) have to have access to your browser to type in a doctored URL.
  2. Even if someone did forward you to another site, what’s the harm in that?

Now, the first point is only sort of true: this is a “browser side” problem, however, there are numerous ways to get a link into an unsuspecting user’s browser. Let us imagine for a moment Mary, one of our site’s clients. As a manager at an important company, she is alert and astute. But busy. Really busy.

If, like many content sites, you send occasional mailouts reminding people that you’re alive, a Bad Guy could send a professional-looking email to Mary (among millions of other people) in an attempt to convince them to click on just such a doctored link.

As a subscriber, she might think these are new articles for a product she knows she subscribes to. She might be interested in clicking on one of the links.

If you look at such a link, especially at the left side as we all do, it looks pretty much harmless. This is the important part: because the link really originates from our real site, it is not obviously a problem. And this is why this is a real problem for those business-aware suits I mentioned off the top: this Bad Guy has hijacked Mary’s trust in our brand. More on this later.

Mary clicks. She goes to the login page, and she logs in.

From there, an alert hacker could convince Mary that her login actually didn’t work, so please try again, but this time on a page that looks exactly like the last one (which was real) yet is actually a Bad Site, resulting in her credentials being stolen.

As an aside, there are lots of tricks to hide the data in the URL so it’s harder to see what it’s doing: an encoded URL can be functionally identical to a more obviously bad one while looking innocuous.

Why Mitigation isn’t the Only Point

Technically speaking, there are things that can be done to fix this. We may do a blog post on scrubbing techniques for this sort of situation, but fundamentally the data just needs to be validated when it comes in from URLs. This isn’t what I wanted us to learn, out here among the business sharks.
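As a taste of what such scrubbing can look like, here is one possible validation sketch in Ruby; `safe_return_path` is a hypothetical helper, not our production code. It accepts only relative paths, so the return-to value can never point off-site:

```ruby
require "uri"

# Return the requested return-to path only if it is a relative, on-site
# path; anything absolute, protocol-relative or malformed falls back to "/".
def safe_return_path(param, fallback = "/")
  uri = URI.parse(param.to_s)
  return fallback if uri.host || uri.scheme       # absolute or //host URL: reject
  return fallback unless uri.path.start_with?("/")
  uri.path + (uri.query ? "?#{uri.query}" : "")
rescue URI::InvalidURIError
  fallback
end
```

Note that protocol-relative values like `//evil.example.com/login` parse with a host even though they have no scheme, which is why the check rejects on either.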

As I said off the top, the technical solution isn’t really all that difficult. What is difficult is convincing technical people why something this “laterally risky to the website” can be important enough to solve as a high priority. Sure, there are security risks, but in this day and age that’s only part of the problem. The more serious issue, as I see it anyway, is that Mary will no longer trust our company once her credit card has been used, or once her personal data has been skimmed.

Fundamentally, the problem you need to beware of isn’t just what can be done. It’s also about how bad someone can make your brand look with a post to a website after they have found issues. Depending on your business — it’s what people can tell the world to make your business look insecure in an age where competition is fierce, and trust in your brand is very, very valuable.

Okay, you can get back in the shark cage now. 🙂

Ron Matuschek ( @RonEm on Twitter) is a software developer who has been swimming outside of the shark cage for more years than he’d care to admit. 🙂

A Pattern for Asynchronous Data Replication Between Related Objects

It is always a challenge to work around limits in Salesforce. The particular limit we were trying to work around was not being able to create a roll-up summary field with a formula field used as filtering criteria. Let’s examine a scenario to start.

Here at Info-Tech, we sell subscriptions to our research. To allow for this we created a custom subscription object. The subscription has several dates on it (Start Date, Expiry Date, End Date, etc.). These dates determine the status of a subscription (Active, Expired, Cancelled, etc.). In order for a contact to have access to a subscription we create a child object of both the contact and subscription called membership. This membership object is created with a master-detail relationship to both contact and subscription because both the subscription and contact relationship fields are set at the time of record creation and never change. A membership also has a start and an end date which determine the status of a membership (Active, Inactive).

To determine the effective status of a membership we must consider the dates from both the subscription and the membership. That is to say that a membership is not active if the subscription has been cancelled. To achieve this we create a formula field on the membership to tell us what the effective status is.

With all this  in place, we want to create a roll-up summary field on the contact which is a count of the active memberships a contact has. Active memberships have an effective status of active. Unfortunately, the effective status is a formula field and can’t be used to filter the roll-up summary. To make it work, we need to have a non-formula field which stores the effective status for the membership and keep that field up to date whenever the status changes.

Membership Model

Since the effective status of a membership is based on its parent subscription object, we need to update the non-formula field every time either the subscription or the membership has its status change. Pushing an update from the subscription object down to the child membership objects could result in SOQL governor limits being exceeded, so we can’t do this directly in an APEX trigger.

Enter the @future method. The solution is to keep a text copy of the subscription status on the membership and keep this in sync with an asynchronous method every time the status on the subscription is updated. So, we create a trigger on the subscription object which makes a call to the future method on update.

A few things to consider when making this method call in a trigger: 1) the execution of a trigger can happen multiple times in a single transaction if workflow is present, 2) a batch can have any number of subscriptions in it, and 3) you cannot call a future method within the context of batch APEX.

For the first point, we wrap the future method call in a check on a static boolean variable. If the boolean is false, we make the call and set it to true. This prevents multiple executions of the future method within a single transaction. For the second consideration we put a formula field on the membership to tell us if the text status on the membership differs from the subscription. This allows us to query all membership records where the formula is true, thus eliminating the need to pass any information to the future method about the subscriptions being updated. Finally, we inspect a boolean value we set at the start of our batch jobs and, if it is true, we do not call the future method.

trigger SubscriptionStatusUpdate on Subscription__c (after update)
{
    /* Two checks to perform here:
       i)  Ensure that if we are executing within the context of a batch,
           we don't try to call the future method, as it will throw an exception
       ii) Ensure that if we have already called the future method once,
           we do not call it again */
    if (TriggerControl.syncInsideBatch == false &&
        TriggerControl.futureMembershipMethodCalled == false)
    {
        /* Future method call to update memberships */
        UtilitySubscriptionMethods.synchronizeStatusesOnMembership();

        /* Ensure this future method call is not performed again */
        TriggerControl.futureMembershipMethodCalled = true;
    }
}

The future method simply selects all membership records which have their formula set to true and iterates over the list, updating the status fields from the subscription object. Two additional scenarios we have considered in our code are the maximum number of records we can update and the failure of some records when updating the batch. To ensure we stay under our SOQL limits in the batch, we limit the return to 2000 records. To allow some records to succeed while others fail, we use the Database.update method and inspect the return value.

public class UtilitySubscriptionMethods
{
    // This future method is used to process EffectiveStatus changes.
    // It is called, and then run whenever there is processing time available.
    @future
    public static void synchronizeStatusesOnMembership()
    {
        /* Set to avoid any potential for additional future methods being called */
        TriggerControl.futureMembershipMethodCalled = true;

        /* Get a list of all memberships pending sync */
        List<Membership__c> members = [SELECT Id, EffectiveStatus__c,
                                              EffectiveStatusValue__c,
                                              SubscriptionStatus__c,
                                              Subscription__r.Status__c
                                       FROM Membership__c
                                       WHERE RequiresSync__c = 'Yes'
                                       LIMIT 2000];

        List<Membership__c> updatedMembers = new List<Membership__c>();

        for (Membership__c currentMember : members)
        {
            // Update the status values copied down from the parent subscription
            currentMember.EffectiveStatusValue__c = currentMember.EffectiveStatus__c;
            currentMember.SubscriptionStatus__c = currentMember.Subscription__r.Status__c;

            // Set the member as updated
            updatedMembers.add(currentMember);
        }

        if (updatedMembers.size() > 0)
        {
            try
            {
                // Use the Database.update method to allow for partial successes
                Database.SaveResult[] lsr = Database.update(updatedMembers, false);
                String errorMessage = 'The Membership Status Sync threw the following exceptions: ';
                Boolean errored = false;

                // Record any error messages
                for (Integer i = 0; i < lsr.size(); i++)
                {
                    Database.SaveResult sr = lsr[i];
                    if (!sr.isSuccess())
                    {
                        Database.Error err = sr.getErrors()[0];
                        errored = true;
                        errorMessage += err.getMessage() + '\n ';
                        errorMessage += 'id: ' + updatedMembers[i].Id + '\n';
                    }
                }

                if (errored)
                {
                    throw new CustomSyncException(errorMessage);
                }
            }
            catch (CustomSyncException e)
            {
                // If there was a partial failure on the update then e-mail notification but do not fail the batch
                UtilityGeneralMethods.CreateBatchErrorEmail('synchronizeStatusesOnMembership - Record Update Errors: ', e.getMessage());
            }
            catch (Exception e)
            {
                // Any other unknown exception will cause an error in the batch
                UtilityGeneralMethods.CreateBatchErrorEmail('synchronizeStatusesOnMembership - Batch Exception: ', e.getMessage());
                throw e;
            }
        }
    }
}

We have used this pattern in several places within Salesforce. Although it has helped us reach our goals, you should evaluate the fit whenever you want to use it, as the execution of @future methods has a daily limit.