Category Archives: Beginner

Introducing Chimpegration Cloud for Raiser’s Edge NXT

When NXT was announced at BBCon 2014, there was some initial confusion as to what it would mean for Raiser’s Edge users. I have to be honest: at the time I was underwhelmed. I could see the logic of moving gradually to the cloud, but it wasn’t exciting. As the years have gone by and NXT has matured and developed, I have begun to see the benefits.

Of course, DBAs still spend a lot of time in the traditional database view, and there is a lot of impatience for NXT to catch up so that the web view goodness reaches those who are heavily involved with Raiser’s Edge.

As developers we have had high hopes for the web-based API. All of a sudden we could break out of the plugins area and develop Chimpegration’s potential.

Chimpegration Cloud follows a similar pattern to Raiser’s Edge NXT: some features are found in the database view plugin and also in the web Cloud version. Where Chimpegration Cloud really shines, however, is in the new possibilities it opens up.

Up until now it has only been possible to schedule processes with on-premise installations of Chimpegration. As standard, it is now possible to collect bounces, unsubscribes, opens, clicks and more on a regular basis. Set it up and let it run. Have it run at night so that the results are ready for when you arrive in the office in the morning. Or sit with your coffee and be mesmerised as the number of processed records increases.

Q. What could be better than scheduling a process?
(That was a rhetorical question; I wasn’t expecting anybody to put their hand up.)

A. Real-time processing. What does that mean? Simple. You set up an action so that as soon as a constituent subscribes, bounces or unsubscribes, it feeds directly into RE!
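
For the more technically minded, here is a minimal sketch of the general idea behind that kind of real-time hook. It assumes a Mailchimp-style webhook posting form-encoded fields such as `type` and `data[email]` to a small Flask endpoint, and a hypothetical `update_constituent` helper standing in for the update made in RE; it is an illustration of the concept, not how Chimpegration Cloud is actually built.

```python
# Minimal sketch of reacting to a Mailchimp webhook in real time.
# The Flask endpoint and update_constituent() are illustrative assumptions,
# not the actual Chimpegration Cloud implementation.
from flask import Flask, request

app = Flask(__name__)

def update_constituent(email, event_type):
    # Hypothetical placeholder for the update made in RE, e.g. marking the
    # email invalid on a bounce or adding a solicit code on an unsubscribe.
    print(f"{event_type}: {email}")

@app.route("/mailchimp-webhook", methods=["GET", "POST"])
def mailchimp_webhook():
    if request.method == "GET":
        return "ok"                            # Mailchimp verifies the URL with a GET
    event_type = request.form.get("type")      # e.g. "subscribe", "unsubscribe", "cleaned"
    email = request.form.get("data[email]")
    if email and event_type in ("subscribe", "unsubscribe", "cleaned"):
        update_constituent(email, event_type)
    return "", 200

if __name__ == "__main__":
    app.run(port=5000)
```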

This is just the beginning. Chimpegration Cloud is waiting for the SKY API to catch up! We want to offer exports (the functionality is already in place for Altru and BBCRM, so come on, SKY API team, give us data export!) and we are working on import.

We are the masters of our own destiny (and our own servers). This means that if we see that processing is going slowly we can ramp up the power. You no longer have to share your Chimpegration processing power with others.

If you are interested (and who wouldn’t be if you have managed to read this far), why not take a two-week trial or get in touch to ask us a question?

Introducing Chimpegration Cloud for Altru

Do you use Mailchimp? Even if you use another email marketing tool, what is your process?

After writing and rewriting the text, designing and redesigning the layout, adding graphics, seeking feedback and finally sending out one of the greatest appeals you have ever done, you wait for the responses.

Of course, the best type of response is the donations that come flooding in, but the other kind arrives too: the responses that count towards determining your constituent segments and who to target in the future. Every bounce, unsubscribe, open and click helps you to work out who is going to become that next major donor and who does not want to hear from you again (maybe your choice of dancing cats in the appeal email should be revisited next time around!).

You pull up a report in Mailchimp and painstakingly update Altru with those unsubscribes and bounces; do you really have time to add the numerous clicks too?

With Chimpegration Cloud, you set up your process to retrieve different results. Track your opens and clicks by adding interactions. Mark your constituents as ‘do not contact’ or add an attribute if they unsubscribe. Set the email to an invalid email address for bounces or go crazy and do any combination of the above.

You can also export your records to Mailchimp to start with. Set up a query and push the constituents out to Mailchimp.

You can do all of this as a one-off, or you can set up a schedule so that it runs at night, with everything ready for you when you walk through the office door the next day.

Q. What could be better than scheduling a process?

(That was a rhetorical question; I wasn’t expecting anybody to put their hand up.)

A. Real-time processing. What does that mean? Simple. You set up an action so that as soon as a constituent subscribes, bounces or unsubscribes, it feeds directly into Altru!

This is just the beginning. We are constantly updating the application and adding new features. Look out for import soon.

If you are interested (and who wouldn’t be if you have managed to read this far), why not take a two-week trial or get in touch to ask us a question?

Introducing Chimpegration Cloud for BBCRM

Do you use Mailchimp? Even if you use another email marketing tool, what is your process?

After writing and rewriting the text, designing and redesigning the layout, adding graphics, seeking feedback and finally sending out one of the greatest appeals you have ever done, you wait for the responses.

Of course, the best type of response is the donations that come flooding in, but the other kind arrives too: the responses that count towards determining your constituent segments and who to target in the future. Every bounce, unsubscribe, open and click helps you to work out who is going to become that next major donor and who does not want to hear from you again (maybe your choice of dancing cats in the appeal email should be revisited next time around!).

You pull up a report in Mailchimp and painstakingly update BBCRM with those unsubscribes and bounces; do you really have time to add the numerous clicks too?

With Chimpegration Cloud, you set up your process to retrieve different results. Track your opens and clicks by adding interactions. Mark your constituents as ‘do not contact’ or add an attribute if they unsubscribe. Set the email to an invalid email address for bounces or go crazy and do any combination of the above.

You can also export your records to Mailchimp to start with. Set up a query and push the constituents out to Mailchimp.

You can do all of this as a one-off, or you can set up a schedule so that it runs at night, with everything ready for you when you walk through the office door the next day.

Q. What could be better than scheduling a process?

(That was a rhetorical question; I wasn’t expecting anybody to put their hand up.)

A. Real-time processing. What does that mean? Simple. You set up an action so that as soon as a constituent subscribes, bounces or unsubscribes, it feeds directly into BBCRM!

This is just the beginning. We are constantly updating the application and adding new features. Look out for import soon.

If you are interested (and who wouldn’t be if you have managed to read this far), why not take a two-week trial or get in touch to ask us a question?

Name Splitting in Importacular

Every so often we get a support question from a user asking us how they can import data like the following that appears in one Excel column:

“Dr David A Zeidman PhD”

We have invariably told them that this is very difficult to manage and that they would have to break the one column up manually into the five separate components (title, first name, middle name, last name and suffix) so that they could map them.

With Importacular 3.5 (available now for self-hosted organizations and coming within an indeterminate period of time for Blackbaud-hosted users) you are able to import combined fields like this.

The new constituent area settings allow you to split one field in your incoming file or data source into its component parts. The logic takes into consideration common titles, first names and last names (taken from US survey data) as well as suffixes. It also handles multi-word last names, e.g. Von Trapp or De La Fuente.
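
To give a flavour of the kind of logic involved, here is a much-simplified sketch of that sort of splitter in Python. It is illustrative only (the title, suffix and surname-prefix lists are placeholders) and is not the actual Importacular code:

```python
# Illustrative sketch of name splitting; not the actual Importacular logic.
TITLES = {"dr", "mr", "mrs", "ms", "prof", "rev"}
SUFFIXES = {"phd", "md", "jr", "sr", "ii", "iii", "esq"}
SURNAME_PREFIXES = {"von", "van", "de", "la", "del", "di", "mac", "st"}

def split_name(full_name):
    parts = full_name.replace(",", " ").split()
    title = parts.pop(0) if parts and parts[0].lower().rstrip(".") in TITLES else ""
    suffix = parts.pop() if parts and parts[-1].lower().rstrip(".") in SUFFIXES else ""
    first = parts.pop(0) if parts else ""
    # Work backwards to pick up multi-word last names such as "Von Trapp".
    last_words = [parts.pop()] if parts else []
    while parts and parts[-1].lower() in SURNAME_PREFIXES:
        last_words.insert(0, parts.pop())
    last = " ".join(last_words)
    middle = " ".join(parts)  # whatever remains is treated as middle name(s)
    return {"title": title, "first": first, "middle": middle,
            "last": last, "suffix": suffix}

print(split_name("Dr David A Zeidman PhD"))
# {'title': 'Dr', 'first': 'David', 'middle': 'A', 'last': 'Zeidman', 'suffix': 'PhD'}
```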

What is the best part of this? There is absolutely no extra cost to use this feature. It is included as standard irrespective of whether you have purchased any other data sources.

Download the latest version of Importacular now!

GDPR and Consent in Raiser’s Edge

I have been really busy of late. While many an EU non-profit has been kept awake at night because of GDPR, I have not quite been losing sleep but have nevertheless been very involved in implementing GDPR in our products. This has taken the form of the new consent module.

If you are not in the EU, or otherwise not on the latest version of RE7, then you may wonder what I am talking about when I mention the new consent module. I am not going to go into too much detail here as Blackbaud have some good resources that cover it.

What I will say is that the implementation of consent is very different from many other modules in RE. Luckily there is less and less scope for RE7 API developers as Blackbaud moves towards NXT and expands the REST-based SKY API. So I am wondering if, as a last challenge to those remaining in the RE7 API game (myself included), Blackbaud decided to make the new consent module even less consistent than previous modules.

Here are a few of its features:

  • The consent collection cannot be found within the regular BBREAPI assembly. You have to look elsewhere for it.
  • It does not save alongside the rest of the constituent record but has its own save routines. This also means that the VBA events are not fired when a consent record is saved.
  • In the first release it does not throw an exception when you supply an invalid combination of channel and category, as the UI does, but if you are really clever (or decipher the sample code) you can work out how to do your own validation.

Now I should not be too harsh on the BB developers. Introducing a new module like this is extremely difficult. There are so many intertwined areas that must be accounted for, and I am sure that the design decisions were taken for a reason. (One of these being that it is much easier to work with very many consent records if they are a standalone entity.)

How are we updating our applications to work with the consent module?

Audit Trail:

As you would expect, changes made to consent records will be tracked, but because consent records are saved as a standalone entity, those changes are only captured if the constituent record is also saved afterwards.

Validatrix:

This is a tricky one. We have included consent records as part of Validatrix, but because they are a standalone entity and are opened and saved in their own right, they do not fire the VBA events that tell Validatrix to prevent a save. That means that a user can add a consent record and close the constituent without saving the constituent. You cannot, therefore, have a consent record as the primary criterion. However, if you have the consent record as a dependency of a constituent-based field then it will be included in the criteria when you save the constituent.

Importacular:

As you would expect, Importacular allows you to import consent records. You can match on any combination of channel, category, date, response and source to ensure that you are not creating duplicate consent records (although by default it matches on channel, category, date and response).
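
As a rough illustration of what that matching amounts to (the field names below are made up for the example and are not Importacular’s internal representation), the duplicate check is essentially an equality comparison on whichever fields you choose:

```python
# Rough illustration of consent duplicate matching; field names are
# illustrative, not Importacular's internal representation.
DEFAULT_MATCH_FIELDS = ("channel", "category", "date", "response")

def is_duplicate(incoming, existing_records, match_fields=DEFAULT_MATCH_FIELDS):
    """Return True if any existing consent record matches the incoming one
    on every configured field."""
    return any(
        all(incoming.get(f) == record.get(f) for f in match_fields)
        for record in existing_records
    )

existing = [{"channel": "Email", "category": "Newsletter",
             "date": "2018-05-25", "response": "Opt-in", "source": "Website"}]
incoming = {"channel": "Email", "category": "Newsletter",
            "date": "2018-05-25", "response": "Opt-in", "source": "Import"}

print(is_duplicate(incoming, existing))              # True: default fields all match
print(is_duplicate(incoming, existing,
                   match_fields=DEFAULT_MATCH_FIELDS + ("source",)))  # False: source differs
```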

Chimpegration:

This is perhaps our most ambitious development. Until Blackbaud add consent information to Query and Export, you are not able to export consent records to MailChimp. However, it is probably more useful to export the outcome of the consent records, i.e. solicit codes, which give you a good picture of a constituent’s intentions.

When managing campaigns, you can add a consent record based on the action: if a subscriber unsubscribes, for example, you may want to add a consent record.

Sync is where the most complex piece of development occurs. We allow you to map individual groups and group items to the addition of different types of consent records. Equally, when specific solicit codes are added (in response to consent records having been added previously), these can be mapped to group items. We have a longer description of this on our knowledgebase.

When is this available? Importacular, Audit Trail and Validatrix are already live. Chimpegration is live for self-hosted and will go live in the near future for hosted organisations.

Moving over to SKY API

This blog has mainly consisted of the COM-based RE7 API and my musings on all things Blackbaud related. I am sure that the latter will continue, but I have realised for a while now that, as time goes on, the RE7 API is becoming less and less relevant (although not entirely so) and that there is a natural progression towards the SKY API.

In general there is probably less to be said about SKY API. Firstly, Blackbaud have been doing a much better job of documenting it than they ever did with the RE7 API. They are also putting a lot more thought into it, so that there are far fewer inconsistencies (so far at least) than there ever were with the RE7 API. Where there are difficult new areas, they have written up good documentation or blog posts to explain them. In short, they have made my job here somewhat redundant… Well, thanks a lot, Blackbaud!

Actually, yes, thank you. I would much rather have a clean, usable API than one where I have to write up blog posts explaining how to do things. I am sure that there will be moments, but there will quite possibly be fewer of them.

One topic that I think deserves some discussion, though, is the porting of existing functionality from RE7 to SKY. Those of us who have products written for RE7 are keen to see that functionality available in SKY so that we can port our solutions over.

I have been very keen to transfer Chimpegration, but one of the stumbling blocks has been the lack of bulk data processing in SKY, specifically the ability to return filtered lists of constituents. On many platforms this means simply specifying a last-changed date or a keyword search. On RE7, though, it is possible to make use of a query to retrieve that information: the user would set up the criteria in the query themselves and the application would allow them to select that query.
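
For comparison, here is a sketch of what the “last changed date” style of filtering looks like against SKY API. It assumes the constituent list endpoint accepts a last_modified filter and that you already have an OAuth access token and subscription key to hand; treat the parameter names as assumptions rather than gospel.

```python
# Sketch: page through constituents modified since a given date via SKY API.
# Assumes the constituent list endpoint supports a last_modified filter and
# that ACCESS_TOKEN / SUBSCRIPTION_KEY were already obtained via OAuth.
import requests

ACCESS_TOKEN = "..."        # obtained via the SKY API OAuth flow
SUBSCRIPTION_KEY = "..."    # from your Blackbaud developer account

def constituents_changed_since(last_modified):
    url = "https://api.sky.blackbaud.com/constituent/v1/constituents"
    headers = {
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Bb-Api-Subscription-Key": SUBSCRIPTION_KEY,
    }
    params = {"last_modified": last_modified, "limit": 100}  # page size is an assumption
    while url:
        response = requests.get(url, headers=headers, params=params)
        response.raise_for_status()
        payload = response.json()
        for constituent in payload.get("value", []):
            yield constituent
        url = payload.get("next_link")   # follow paging links until exhausted
        params = None                    # next_link already carries the query string

for c in constituents_changed_since("2016-01-01T00:00:00Z"):
    print(c.get("id"), c.get("name"))
```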

There is some discussion on the SKY API forums about how imperative it is that this be ported over to SKY and that we should be allowed to once again select an existing query.

Despite really needing this functionality for Chimpegration, I am not convinced that this is the best course of action for SKY. This new API should embrace a general approach to this problem; it cannot be based on RE7’s query module. There is definitely a need to generate lists based on complex filters and criteria, and that should be exposed somehow to the developer community, but to simply port the existing RE7 functionality to SKY would be short-sighted and would not take into consideration the other applications that will one day make use of the same API. I want an API that will work with RE, BBCRM, ETap and others. To simply port queries to SKY would confuse the issue and make for an API that is not consistent.

Our MailChimp API Headache

It has been a while since I last posted on this blog. The main reason is that we have been so very busy working on, and struggling with, version 3 of MailChimp’s API.

Around two years ago we became aware of MailChimp’s latest API. It was certainly a very good example of a clean REST-based API, having all the characteristics of a well-designed interface. All of the structures made sense, and anybody used to working with REST-based APIs would have no difficulty picking up this new API and creating a powerful new application with it.

So what went wrong?

About 18 months ago MailChimp informed the community of developers that it would be retiring all of its other APIs in a year’s time so that it could concentrate on V3 as the sole API. This meant shutting down not only V2 and older versions but also the export API.

This was a really big deal for us, as Chimpegration made solid use of V2 and, in particular, the export API. When we first gave MailChimp a demo of Chimpegration they told us that it was one of the most complex and intricate applications making use of their API. They were blown away by the detail available in synchronizing and the level of user choice that made up the application. What is more, many of the features that exist today were not available at that time, including the synchronization of groups, the use of profiles and the ability to remove subscribers based on a query.

The news that MailChimp were going to shut down their API came as a shock to us. It meant some very big changes in Chimpegration without much to show for it. How can you possibly sell the fact that Chimpegration now uses V3 instead of V2? Nobody cares, as long as the application carries on working.

What was clear to us when we started working with V3 was that those who designed this new API really had not spoken to anybody else about it. It was as if they had been working in an isolated area of the building, kept away from any V2 developers or, for that matter, existing users of V2.

They changed the structure of areas of the API such as groups. (The ids of V2 groups no longer match the ids of V3, making it very difficult to transfer to the new version unless we only consider the names of group items rather than the ids.) They added a batch method so that you could send very many calls to MailChimp, but they had no idea what the maximum allowable would be. We would wait, send through our batch updates and wonder whether it would time out or not. They changed the unique identifier for a subscriber so that it had no bearing on the previous unique identifier. This meant that clicking on a link to bring up the contact in MailChimp no longer worked, because the unique identifier no longer existed. (This has now been fixed, or at least we are redirected to the correct contact.)
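
To illustrate the batch method mentioned above, here is a sketch of queuing many member updates in a single V3 batch call. The list id and email addresses are made up for the example; the batches endpoint and the method/path/body operation format come from MailChimp’s V3 documentation.

```python
# Sketch: queue many subscriber updates in a single v3 batch call.
# The list id and subscriber details are made up for the example.
import hashlib
import json
import requests

API_KEY = "0123456789abcdef0123456789abcdef-us1"   # the key suffix is the data centre
DC = API_KEY.rsplit("-", 1)[-1]
BASE = f"https://{DC}.api.mailchimp.com/3.0"
LIST_ID = "abc123def4"                              # made up for the example

emails = ["alice@example.org", "bob@example.org"]

# Each operation wraps an ordinary v3 call as method/path/body.
operations = []
for email in emails:
    subscriber_hash = hashlib.md5(email.lower().encode()).hexdigest()
    operations.append({
        "method": "PUT",
        "path": f"/lists/{LIST_ID}/members/{subscriber_hash}",
        "body": json.dumps({"email_address": email, "status_if_new": "subscribed"}),
    })

response = requests.post(f"{BASE}/batches", auth=("anystring", API_KEY),
                         json={"operations": operations})
response.raise_for_status()
print(response.json()["id"])   # poll /batches/{id} later to see whether it all succeeded
```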

One of our biggest headaches, which is still ongoing, is around segments. The new API made it very difficult for us to look up a subscriber by constituent id (the Raiser’s Edge unique identifier). This, we had always insisted, should be stored as a merge variable. Of course, if we were starting from scratch we would use the new MailChimp unique identifier and store that in Raiser’s Edge, but that would be an enormous change and one that would not work for existing organizations.
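
To put the change in concrete terms: in V3 the only direct member lookup is by the MD5 hash of the lower-cased email address; there is no single call to fetch a member by a merge field such as the constituent id. A sketch (the list id and the CONSID merge field are made up for the example):

```python
# Sketch: v3's only direct lookup is by subscriber hash (MD5 of the email).
import hashlib
import requests

API_KEY = "0123456789abcdef0123456789abcdef-us1"
DC = API_KEY.rsplit("-", 1)[-1]
LIST_ID = "abc123def4"                              # made up for the example

def get_member_by_email(email):
    subscriber_hash = hashlib.md5(email.lower().encode()).hexdigest()
    url = f"https://{DC}.api.mailchimp.com/3.0/lists/{LIST_ID}/members/{subscriber_hash}"
    response = requests.get(url, auth=("anystring", API_KEY))
    response.raise_for_status()
    return response.json()

member = get_member_by_email("alice@example.org")
print(member["merge_fields"].get("CONSID"))   # hypothetical merge field holding the RE constituent id
```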

Previously we would feed a segment into the method that retrieved the subscriber details; that segment filtered by constituent id. This was removed, and with it an enormous stumbling block was put in front of us.

Some relief was forthcoming about a month before the cutoff deadline. MailChimp announced that they would not be shutting down the export API. Clearly it was realised that the batch methods were just not good enough for large data transfers. This was important news for us as we relied heavily on this part of the older API.

It still leaves us with the issue of segments. The V3 developers did not look at the V2 segments; instead they just changed the format. The problem, of course, is that the export API takes in segments in the V2 format. It is all very well fetching the user’s existing segments and then passing them into the export API (an extremely powerful feature that we have been making regular use of), but all of a sudden we have to adjust each segment in order to convert it to a version that the export API understands.

What we would like MailChimp to do:

  • Either expose the V2 segment definitions again or allow us to pass V3 segments into the export API. Either way, the existing method should not simply be turned off; we need time to convert and test.
  • Update the export API so that it is fully compliant with V3 (again keeping the existing calls backward compatible).
  • Produce a guide to transitioning from V2 to V3. I have been asking for this since we started and am amazed that it was never forthcoming.

To our Raiser’s Edge Chimpegration customers: thank you for your patience. We realise that you do not care who is to blame for issues with Chimpegration as long as the application works. We understand that and, as ever, strive to release fixes to issues as soon as we are told about them. After much work we hope that we have reached a point where this latest version of Chimpegration is now more stable than ever. We continue to work on the plug-in to ensure it remains as useful as possible in integrating these two great applications.

My BBCon 2016 Sessions to Watch Out For

BBCon 2016 (Blackbaud Conference for Nonprofits) is nearly here. We are back in Washington DC again this year and, whether you are a first-timer or a regular, you will surely be as overwhelmed as I was by the choice of content available. That is why, once again, I have sifted through the breakout sessions looking for my top tips.

As you may have noted, there is one session fewer in my list this year. I won’t be speaking formally at any breakout session, but all is not lost. I shall be at BBCon, so if you would like to hear me speak, just approach me and either hover creepily beside me listening to my dulcet English tones or (somewhat more appealingly) engage me in conversation.

Anyway, enough gratuitous self-promotion; on with my list of sessions.

My choices are split into two concise categories and one less-than-concise one:

a) Sessions whose speakers are amazing

b) Sessions that talk about software development and NXT development

c) Sessions that probably have amazing speakers, but what really sold it for me was the description (because I cannot remember having heard the speakers before).

Raiser’s Edge NXT: Moving to and Mastering the Next Generation of RE

I always recommend Bill (and not just because he acknowledges me in his book Fundraising with The Raiser’s Edge). He is always a joy to listen to. He not only has an in-depth knowledge of Raiser’s Edge 7 and NXT, but his ability to convey that information in an interesting and thought-provoking way sets him apart from everyone else who works with these products.

Moving forward to meet your growing needs: a solution discussion for Large Raiser’s Edge 7 customers

Other than Jim, whom I have heard speak, I cannot say that I have heard these speakers before. However, this is such a recurring issue among our clients that it is definitely worth addressing the challenges and solutions faced by large organisations. I would be very interested to hear how NXT, Luminate CRM and BBCRM overcome the limitations that RE7 has faced for larger organisations. I am looking to hear about:

  • how processes have been automated because it is just not practical to enter data manually
  • how systems can be integrated because larger organizations typically do not only have systems from one company
  • and also how organizations can better utilize their volume of data to analyze their fundraising performance.

Raiser’s Edge NXT Roadmap

This is always a must. The fact that there is no Raiser’s Edge 7 roadmap is, of course, telling: the focus is, and always was going to be, on NXT. (That being said, I do hope to hear any snippets of information about RE7; not everybody is ready to move just yet, and much of NXT life is still lived through the eyes of a hosted RE7 instance.) The roadmap sessions are often the ones that swing it for Blackbaud. People will go away feeling either elated or underwhelmed. Let’s hope the former.

Bill and Ed’s Excellent Adventure in Raiser’s Edge

I have already sung Bill’s praises, but if Bill ever had any competition then it was from Ed. Back in the days of the RE Geeks, I often felt as though I were in the shadow of Bill and Ed, whose combined tower of knowledge and eloquence put my own expertise to shame. Aside from the speakers, it is also good to have a session format that is a little more informal than many other sessions. The demonstration and discussion format (the demonscussion?) really does work when covering more complex topics that require clear visual aids and clarification.

Integration with Blackbaud SKY API and SKY UX

Build an app using SKY UX and SKY API

I have grouped these two sessions together because they clearly have a lot of overlap. I can only assume that integration refers to connecting with other systems whereas building an app is about enhancing the existing application. Maybe the order of SKY API and SKY UX will also have some bearing on the emphasis each places within its session.

I have only heard Dan speak (Integration) at previous sessions and indeed have worked with him during the API discovery process. If his past performance is anything to go by, then this will surely be a very interesting session. There is so much that can already be done with the SKY API and the team have only just scratched the surface. I am excited to see what they have produced in these sessions. Who knows, they may even demonstrate our own SKY API Chimpegration product! Now that would definitely be worth viewing.

Importacular

We have always had a product that imports data into The Raiser’s Edge. We would regularly customise it for non-profits so that it would adhere to their business rules and bring in their data in the format that they wanted. Then along came the somewhat better-known import tool for The Raiser’s Edge (do I really have to mention names?) and our quiet workhorse was, not quite put down, but put to rest.

In recent years, however, we have had some requests to bring it back. The reason stems mainly from our Audit Trail and Validatrix clients, who regularly use the other well-known product but would also like to capture all the changes that are made using Audit Trail and to prevent data from entering Raiser’s Edge using their Validatrix business rules.

We also had a number of third-party data suppliers ask us the best way to get data into RE and whether we could help them.

So a plan was formed to resurrect our long-forgotten import tool, integrate Audit Trail and Validatrix, and update it with the best bits from IDLookup and The Mergician. We also did what only we do best: we integrated it directly with other third-party data sources. And so Importacular was born!

We built, for want of a better word, a plug-in architecture so that adding new third-party data sources does not mean releasing a new version of the application. Instead we can add new data sources to the application remotely, with the end user deciding whether to activate them and install the client assemblies. Once you are used to the main application, selecting a new data source component is a breeze: it follows the same pattern as all the other data sources that you have used previously.

We are really excited about the future for Importacular and being able to help non-profits of all sizes to get their data into Raiser’s Edge without pulling their hair out over Excel and CSV files.

Performance Management using Audit Trail – slides available

For those of you who were at BBCon 2013 (and for those of you who were not), Mohammed Dasser and I presented a session on performance management using Audit Trail. The slides are available here: http://www.slideshare.net/blackbaud/performance-management-using-audit-trail?from_search=1

If you have any questions about what Mohammed has done then I would love to be able to answer them, but to be honest he overwhelmed me with his sophisticated use of Audit Trail and you would be better off asking him! That being said, feel free to post your comments about the session or ask directly and I will try to get an answer.