Developers Archive - TransSwipe - Merchant Services and Credit Card Processing


Archive for the ‘Developers’ Category

Embracing Open Source Development

Posted in Blog, Developers, Knowledge Base on November 29th, 2017

At Dwolla, we are big believers in the power of open source software. We believe that using and contributing to open source makes for better designed and safer software, enabling developers to focus on building tools and writing code that are within their specific area of expertise.

When developing open source software, there are some guiding principles we believe in and can share with you, but first let me provide some background.

Value of Open Source

Open source software has helped drive much of the growth and innovation on the internet in the last few decades. It allows for all companies—or even a couple of developers in their garage—to start from a base of software that is free to use, modify, and experiment with. They aren’t tasked with becoming experts in operating systems or network communication; instead, they can focus on the company they want to build and getting to market much sooner.

With open source software, those same developers also know the software or code they are using has been reviewed by their peers, the greater community of developers. Better yet, if they have any questions, they can see the source code for themselves and make any changes they need. This can lead to software that is safer and stronger and gives more power to small companies and independent developers.

Contributing to open source projects


Recently, Dwolla was part of the team contributing to Mojaloop, the Gates Foundation’s open source software for creating interoperable payments platforms. Mojaloop was designed to be open sourced from the beginning, which made it an exciting opportunity to build something that could potentially be a foundation for payments in developing countries and ecosystems, and to see how it could grow and change from where we started.

Building software that you know will be completely open sourced calls for a slightly different set of priorities. We knew this code was going to be openly available, so there were a few principles to consider that are different than working on closed source software.

Principles to Consider

Document Rigorously

First, you have to remember that there will be people using the software who weren’t part of the original project. This means that you can’t assume they know anything, and you have to document, document, and document some more. You must have clear documentation of the software, including what it does, the dependencies it requires, and any configuration options that are present.

While working on Mojaloop, we took these principles to heart. All of the individual initiatives have helpful documentation, including README files, API documentation, example projects, and “getting started” guides.

Provide Examples

Providing examples of the software in use is also a great addition. Examples help developers understand how the software is intended to be used, while also providing an excellent place to start with their usage of the open source code.

A project with an extensive set of examples is ChefSpec, where developers can find real-world examples of tests you might write for Chef recipes. These real-world examples can serve as foundational starting points.

Create a Roadmap

You should also create a roadmap for the project since the initial open sourcing of a project is rarely the end state. The project will continue to grow and attract users and volunteers; providing a roadmap gives volunteers an idea of what features can be contributed to. A roadmap not only gives others a chance to see what is coming for the project but also encourages feedback and ideas on how to evolve the software.

A great example of a project with a clear roadmap is Mozilla’s Firefox. This roadmap does an excellent job of highlighting the focus of the project for the year, while also giving target dates and build numbers for when features will be available.

Document Architectural Decisions

In working on Mojaloop, we also documented many of the architectural decisions that were made, allowing those who might be new to the project to understand some of the questions and considerations we thought about and discussed during the initial development. We also created a roadmap that clearly outlines what features need further development and some larger features we would love to see the software support in the future.

Extensible Software

In open sourcing, the software you create has the potential to be used by many people in a wide variety of ways, so it’s important to consider extensibility. From the beginning, you have no way of knowing how others will want to use it or what they will specifically want to do with your open source code.

By designing your software to be extensible, you make it easier for users to adapt it for their projects. Extensibility can also allow for people who are less technical in nature to modify the software through plugins or other options for adding features. WordPress’s fantastic plugin support is a great example of this extensibility.

During the development process, we decided that extensibility was going to be a main feature of Mojaloop’s Central Directory. Serving as a core component, the Central Directory allows a user to be found via search through many different pieces of information. There are endless ways to search for someone, from their name to their email address, to their Facebook account. Rather than designing the Central Directory to store all of this information for a user, we wanted to leverage systems that were already securely storing this data and apply it to search. The Central Directory allows for an expanse of systems to be easily plugged in, so that those integrating the open source project can be found quickly and with no personal data requirements.
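The pluggable lookup design described above can be sketched in miniature. Everything below, including the class and method names, is hypothetical and illustrative; it is not Mojaloop’s actual interface:

```python
class CentralDirectory:
    """Toy sketch of a plugin-style directory; not Mojaloop's real interface."""

    def __init__(self):
        self._resolvers = []

    def register_resolver(self, resolver):
        """Plug in an external lookup system (email, phone, social, ...)."""
        self._resolvers.append(resolver)

    def find(self, query):
        """Return the first account identifier any plugged-in system knows."""
        for resolver in self._resolvers:
            result = resolver(query)
            if result is not None:
                return result
        return None


directory = CentralDirectory()
directory.register_resolver(lambda q: "acct-42" if q == "sam@example.com" else None)
directory.register_resolver(lambda q: "acct-99" if q == "+1-555-0100" else None)

print(directory.find("sam@example.com"))  # acct-42
print(directory.find("unknown"))          # None
```

The point of the design is that the directory itself never stores the personal data; each plugged-in system answers queries against data it already holds securely.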

Mojaloop & Open Source

Mojaloop was a worthwhile project that allowed the team at Dwolla to really get involved with and contribute to open source software at a deeper level. Open source is something that we use and contribute to regularly, and we believe it is one of the things that helps make it easier and safer to build new and exciting platforms, applications, or infrastructures.

Looking to the future, technology will only continue to become more collaborative and transparent. Open source projects are great examples of giving back to the larger community, allowing those who are experts in specific areas to share their knowledge and perspective and ultimately enabling better collaboration.

Interested in learning more? Contact Dwolla

ACH return codes and how to test for transfer failures in Dwolla’s Sandbox

Posted in Blog, Developers on November 21st, 2017

While progress is being made to modernize ACH processes, the U.S. banking system as a whole is not always straightforward. Truthfully, navigating the world of ACH can be quite daunting at times. With all the complexities, one would assume that developing an application to facilitate bank-to-bank transfers would be an arduous task. However, with Dwolla’s Access API, you can integrate technology that helps you facilitate bank-to-bank transfers.

When an application involves facilitating the transfer of money to or from a bank account there is the possibility of transaction failures. This possibility is simply a part of transferring funds within the ACH network. Instead of leaving the handling and processing of ACH errors up to the business, the associated financial institution that rejected the transaction will assign an ACH return code when a bank transfer fails.

Let’s walk through how ACH return codes play out in relation to the Access API. Understanding how the API handles transfer failures in the Sandbox will help avoid unnecessary confusion when you deploy into Production.

When you think about how these failed ACH transactions play out in relation to Dwolla’s Access API, you might have a transfer that ends up failing. By calling the API to retrieve a transfer’s failure reason, you will receive the return code from the financial institution letting you know why the transaction failed. Transfers facilitated by your application can be returned as failed by a financial institution days, or even weeks, after they were initially created. For this reason, you will want to account for and test ACH return codes and scenarios prior to going live with your Access API integration.
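As a small sketch, a failure reason retrieved from the API carries the return code and a short description. The sample body below is invented for illustration, modeled on the failure endpoint’s `code`/`description` fields rather than copied from a real response:

```python
import json

# Illustrative failure payload: the code/description shape mirrors the
# transfer failure endpoint, but this sample is invented for the sketch.
sample_failure = json.loads('{"code": "R01", "description": "Insufficient Funds"}')


def explain_failure(failure):
    """Render an ACH return code and its description as one log line."""
    return "ACH return {}: {}".format(failure["code"], failure["description"])


print(explain_failure(sample_failure))  # ACH return R01: Insufficient Funds
```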

In the Sandbox environment, Dwolla allows you to trigger various bank transfer failures by specifying an “R” code in the funding source name parameter when creating or updating a funding source for an Access API Customer. When a transfer is initiated using a funding source that has an “R” code assigned to its name, a transfer failure event will trigger and the status will update to failed when you simulate bank transfer processing. Let’s walk through two testable scenarios of transfer failures in the Sandbox environment using two Verified Personal Customers.

R01 — Insufficient Funds

Sometimes there is just not enough money available in a bank account. When this happens, a financial institution can return an R01 return code. For this example, let’s consider how a transaction could play out between two verified Customers in the Access API. Sam (source) has a questionable bank account: he doesn’t have enough money in his account to transfer funds to Darren (destination), but he initiates a transfer anyway. Since Sam doesn’t have enough money to cover the transaction he initiated, the transfer will fail. In the Sandbox, we can test the transfer to see what will happen during this transaction.

Setting the transaction up in the Sandbox

To test the transaction in the Sandbox, you can create or update a funding source by changing the `name` to R01. You can now initiate a transfer with the Source of the transfer being the R01 bank account.
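As a sketch, the funding-source request body for this scenario might be built like so. The routing and account numbers are illustrative test values, and the field names follow Dwolla’s public funding-source schema; treat this as a starting point rather than a drop-in integration:

```python
import json


def sandbox_funding_source(return_code, routing="222222226", account="123456789"):
    """Build the body for creating/updating a Sandbox funding source.

    The routing/account values are illustrative test numbers; `name`
    carries the ACH return code we want the Sandbox to simulate.
    """
    return {
        "routingNumber": routing,
        "accountNumber": account,
        "bankAccountType": "checking",
        "name": return_code,  # e.g. "R01" simulates an insufficient-funds failure
    }


body = sandbox_funding_source("R01")
print(json.dumps(body, indent=2))
```

The same helper works for any simulated return code, such as the R03 scenario later in this post.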

In a transfer scenario between two Verified Personal Customers, you can expect these webhooks to be fired.

What Happened?

Sam initiated a request to transfer money from his bank account to Darren’s bank account. However, the money was not moved from Sam’s bank account because Dwolla was unable to complete the transaction: there were insufficient funds. In order to avoid an ACH return next time, Sam will need to initiate another transfer once he has sufficient funds.

R03 — No Account/Unable to Locate Account

Now let’s talk about R03 codes. Imagine making a mistake when typing; it’s not hard to do. An R03 return code can come up for a couple of reasons, but one of the more common ones is a mistyped bank account number or routing number when creating a bank funding source.

For instance, when micro-deposits are initiated, this is still considered a transfer, as funds are being sent to a destination. If an account number was mistyped when micro-deposits were initiated, they will be unable to clear. Let’s imagine a scenario where Sam wants to initiate micro-deposits to his new bank account in order to verify the bank funding source. If he ends up mistyping his bank account number, the micro-deposits will have no place to settle. Dwolla will receive an R03 return code and will automatically remove the bank funding source.

Set this up in the Sandbox

To test this in the Sandbox, you can create or update a funding source by changing the `name` to R03. You can now initiate micro-deposits to this bank funding source.

In this scenario, you can expect these webhooks to be fired.

What Happened?

Dwolla was unable to clear the micro-deposits to Sam’s bank, as the account could not be located. The bank funding source will be automatically removed, and the application will allow Sam to add another bank account and try again.

What Next?

Bank transfer failures are not the only things you need to test for when getting started with the Access API. You can check out our documentation for tips on how to simulate all of our transfer failures in the Sandbox.

With a powerful set of tools at your disposal, testing in the Sandbox is intuitive. Remember that the transfer of funds is simply the movement of data, and utilizing Dwolla’s powerful APIs can facilitate quick and reliable transfers for a variety of scenarios.

Interested in learning more? Contact Dwolla

The value of partner feedback and the new CorrelationID

Posted in Blog, Developers, updates on September 21st, 2017

This blog post comes from John Jackovin, our Senior Product Manager at Dwolla. 


I love partner feedback.

As a product person, I’ve always been wired that way—and you just have to be.

I especially love partner feedback when I’m able to easily identify trends, hearing the same thing over and over (and over). Why, you might be wondering?

Because it means there is a common need that our Access API Partners share, and my goal is to make sure real partner needs are met—and met quickly.

Even as we develop new features for our Access API product, I’m always listening to what the partner needs. For example, when we added multi-user functionality to the Dashboard, our partners had a large influence in that.

More feedback, more improvements

One particular ask we have been hearing more frequently from our Access API partners is the ability to create and use a partner-generated identifier on payment transfers and mass payments. The need for this sort of ID is compelling: the transaction is initiated by the partner, and the partner undoubtedly generates an ID of its own, but it won’t know the ID that corresponds to that transaction until the transaction has been sent to Dwolla. This makes correlating the two records potentially cumbersome, and our partners were looking for a way to follow a transaction throughout the entire process, from initiation to completion.

In order to help simplify the process for our partners, we have created the CorrelationID. The CorrelationID improves the traceability of a transaction between two systems; it allows you to create an ID and use it from the point of origin onward, instead of waiting for a response from Dwolla to get an ID.

This CorrelationID is generated by an Access API Partner and can be used throughout the ACH transaction process to identify both single transfers and mass payments. The CorrelationID is also searchable within the API, allowing a much simpler interface to and from Dwolla.
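A minimal sketch of the partner side, assuming the transfer request carries the identifier in a `correlationId` field; the payload is abbreviated and the URLs are illustrative:

```python
import uuid


def build_transfer(source_url, dest_url, amount):
    """Attach a partner-generated correlationId to a transfer request body."""
    correlation_id = str(uuid.uuid4())  # created partner-side, before any API call
    payload = {
        "correlationId": correlation_id,
        "_links": {
            "source": {"href": source_url},
            "destination": {"href": dest_url},
        },
        "amount": {"currency": "USD", "value": amount},
    }
    return correlation_id, payload


cid, payload = build_transfer(
    "https://api-sandbox.dwolla.com/funding-sources/aaa",  # illustrative URLs
    "https://api-sandbox.dwolla.com/funding-sources/bbb",
    "42.50",
)
# The same cid can later be used to search the API for this transfer.
print(cid)
```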

Now, partners can generate and use their ID throughout the flow, making the correlation process more efficient.

Our product team is excited to see and learn from how our Access API Partners use this feature. Additionally, as we continue to listen to feedback, I look forward to providing more features that help simplify and streamline the ACH transaction processes.

Interested in learning more? Contact Dwolla

Using Carbon Black Rest API for behavior monitoring and process management

Posted in access api, Blog, carbon black, Developers, infosec, REST API on September 14th, 2017

This blog post comes from Dwolla’s Information Security Team. As part of our continuing mission to practice iterative security, we believe in the value of sharing new lessons and best practices that we learn along the way. In this post, our Information Security Analyst walks through a new process he created for better security monitoring.


As a part of our continuous security monitoring practice, Dwolla’s Information Security Team tracks and generates alerts based on an endpoint’s behavior in a number of interesting ways.

For example, if an endpoint user is normally asleep at 11:00 PM, but one day we observe a sudden authentication through VPN at 2:00 AM, that would be abnormal behavior for that user and would generate an event.

Another related example would be an endpoint user on the HR team running processes like ‘git’, ‘python’, or ‘powershell’. This would cause concern because these are processes he or she doesn’t need to run and doesn’t normally run; therefore, it would also be considered abnormal behavior.

Below, we will demonstrate how to analyze processes as we correlate behavioral-based events to gain confidence in the authenticity of associated processes and improve security monitoring for Dwolla’s team.  

Problem Description

Carbon Black is a robust tool that can be used to collect process information from endpoints. In addition to its threat intelligence feeds and alerts, it has a powerful REST API.

I wrote an internal tool to detect abnormal processes running on endpoints, like the examples I described above. Further, to reduce the false positive rate in certain situations, it would be better for us to know whether a process is cryptographically signed or not.

This signature is a simple but powerful indicator of whether a process is “malicious” or not. While the signature isn’t a perfect indicator, since some advanced exploitation techniques such as DLL injection can rely on the pedigree of the parent process, it is very helpful when analyzing large amounts of process information.

During my analysis, I found that if the Carbon Black Process and Binary Search API is used directly, you cannot grab the signing information from the response. The following is an example JSON response for a process search result.

{
  "results": [
    {
      "childproc_count": 0,
      "cmdline": "/usr/bin/git rev-list --left-right master...origin/master",
      "comms_ip": -1062698981,
      "crossproc_count": 0,
      "current_segment": 0,
      "emet_config": "",
      "emet_count": 0,
      "filemod_count": 0,
      "filtering_known_dlls": false,
      "group": "MAC Devices Group",
      "host_type": "workstation",
      "hostname": "host001",
      "id": "-8377683872379532238",
      "interface_ip": 0,
      "last_update": "2017-06-23T14:26:19.601Z",
      "modload_count": 0,
      "netconn_count": 0,
      "os_type": "osx",
      "parent_md5": "000000000000000000000000000000",
      "parent_name": "Code Helper",
      "parent_pid": 75463,
      "parent_unique_id": "53f564bf-ce84-ce9d-0000-000000000001",
      "path": "/usr/bin/git",
      "process_md5": "82a7bf2b1f51f1988be571b8176aa545",
      "process_name": "git",
      "process_pid": -1,
      "processblock_count": 0,
      "regmod_count": 0,
      "segment_id": 1,
      "sensor_id": 31,
      "start": "2017-06-23T14:26:19.601Z",
      "terminated": true,
      "unique_id": "8bbc7ce7-aa4c-8c32-0000-000000000001",
      "username": "user001"
    }
  ]
}


After doing additional research, I found that while I can’t get the signing information from the REST API directly, it does exist in the backend.

With this information, I started testing other keywords in search queries to see if there was any difference in the result, but unfortunately, I still found nothing related to signing information.

After reading more of the query documentation, I found you can search on signing information with query fields like digsig_publisher, digsig_issuer, digsig_subject, digsig_prog_name, digsig_result, and digsig_sign_time.

However, these query fields won’t tell you whether a specific process is signed; they can only be used as criteria within a search query. Essentially, these search queries can only filter for signed processes: you won’t be able to grab signing information from the actual response, but you can determine whether a process is signed from the queries themselves.

I started to use joint search queries to grab signing information about a specific process. This new method works: after trying some joint search queries, I found that if I use process_md5 and digsig_prog_name together, I can get the result I want.

The search query would become:

'q=process_md5%3A' + <process_md5> + '%20digsig_prog_name%3A' + <process_name> + '&rows=1&start=0&sort='

If the process is not signed, I would get nothing in the result, because you can’t match a signed program name against an unsigned process’s MD5. On the other hand, if there is any result in the JSON response, we can say the process is signed.
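That logic can be sketched as follows, building the joint query with standard URL encoding and treating an empty result set as “not signed”. This is a sketch of the approach, not the exact internal tool:

```python
from urllib.parse import quote


def signed_check_query(process_md5, process_name):
    """Build the joint process_md5 + digsig_prog_name search query."""
    q = "process_md5:{} digsig_prog_name:{}".format(process_md5, process_name)
    return "q=" + quote(q) + "&rows=1&start=0&sort="


def is_signed(response_json):
    """A process is treated as signed iff the joint query found a match."""
    return response_json.get("total_results", 0) > 0


query = signed_check_query("82a7bf2b1f51f1988be571b8176aa545", "git")
print(query)

# An empty result set means the process is not signed.
assert is_signed({"results": [], "total_results": 0}) is False
assert is_signed({"results": [{"process_name": "git"}], "total_results": 1}) is True
```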

For an unsigned process, the response you would get looks like the following:

{
  "elapsed": 0.00925898551940918,
  "facets": {},
  "filtered": {},
  "highlights": [],
  "results": [],
  "start": 0,
  "tagged_pids": {},
  "terms": ["process_md5:3d485deffe2f74d0c15f1cb23086f936"],
  "total_results": 0
}

For signed processes, you would receive a similar result, with detailed information like what was shown earlier in the blog post.

No single tool is a solution for comprehensive security monitoring; a variety of data sources and activity are required to get a full picture of behaviors, threats, and countermeasures.

The Carbon Black API is a rich source of information that can enhance the value of abnormal behavior alerts when queried and integrated into automation and investigative activities. Cryptographic signing of processes is a common practice with modern operating systems, and use of this feature can help reduce noise and speed up analysis efforts.

At Dwolla, security is never done. We take an iterative approach to improvement, and we’ll continue to share new InfoSec and security monitoring insights as we learn.

Interested in joining our team? Visit the careers page now.

Interested in learning more? Contact Dwolla

A Dashboard and Admin for ACH Transfer Integrations

Today, we released an intuitive new dashboard for White Label partners to manage customers, view transaction details, and discover business trends. All information within this interface leverages the Dwolla White Label API powering the partner’s ACH transfer integration, but provides the data in a way that’s easy to manage and act upon from a business operations perspective.

By providing a clear view into the integration data, the new dashboard makes it simpler for partners to provide quality customer support to their users, reconcile every payment, and keep tabs on the heart of their payments integration. We’ve built the admin interface so White Label partners don’t have to jeopardize their own time building, maintaining, and scaling their own DIY dashboard.

Check out Product Hunt to see what others are saying!



On the main dashboard, partners will find beautiful charts and graphs built from the data being collected from within their integration with the Dwolla White Label ACH API. Customer and transaction data will be arranged in a way that provides a straightforward look into the health of the business and makes it easy to establish and analyze business trends over time.



Partners can easily look up and edit customer information from within the dashboard, an important feature for providing quality customer support and communication. Customer service is a critical piece to running a successful business—partners like GOAT have been able to reduce cashout related support tickets by 80% after integrating Dwolla’s bank transfer API.



View transactions received and sent across your platform. Search and view transaction details, or simply use this functionality to assist in your reconciliation processes. Run basic accounting and business operations faster.

Save hours of your own developers’ time by eliminating the need to build a custom view of your White Label integration. Utilize a smarter interface for managing payments—your customers and accounting department will thank you.

Interested in learning more? Contact Dwolla

10 best blog posts from the minds of our developers

Every day on Twitter, Slack, GitHub, or one of the many modern forms of communication, our developers and product designers are approached with interesting and insightful questions. This past year, they’ve turned those insights into blog posts and shared them publicly, breaking down complex ideas into articles that everyone can learn from.  

However, content is fleeting; a massive amount of information is published daily. Rather than let the insights from these technical posts fade into the vastness of the internet, we’ve pulled together the 10 most popular posts from our developers and product designers. Enjoy.

1. Developing for simplicity when cleverness is the enemy

As our team of developers prepared to launch a new product, they needed to take a step back and consider the value they were looking to add. This post explores that process.

2. APIs and the power of collaboration for innovation

A look at how the relationship and power of APIs has changed over the years, and how the right collaboration could fundamentally change an industry.

3. How we did it, inside the developer portal redesign

In one of the most interesting redesigns in Dwolla’s history, our Vice President of Product explains how we pragmatically approached our developer portal update to satisfy even the pickiest of developers.

4. Fake it as you make it: why fake services are awesome for developers

This post explains why using fake services to test products can be more beneficial and effective than the alternatives, including a real step-by-step example.

5. Building with Microservices and Docker

Two members of our engineering team dive into how we used dockerization and microservices hand-in-hand to build a more scalable system.

6. What I’ve learned in designing for developers

Our brilliant Head of UX explains how she discovered the different Dwolla developer personas to design the best possible developer portal based on direct insights.  

7. Arbalest: Open source data ingestion at scale

Through the lens of open source software, get a pro’s perspective on the value and process of implementing a scalable data solution. A great post for fans of Amazon AWS and Tableau.

8. Ask hard questions, build a better product

A rare chance to get inside the mind of a VP of Product at a fintech company, this blog post discusses how our team approached rethinking the product from the lens of leadership.

9. Cutting the busywork for developers with better design

Our developer portal upgrade was a major initiative (which we talk about in a post mentioned above). In this post, get insight into the specifics of what developers love and hate about interacting with APIs, and how we optimized for this.

10. Building a data platform, embracing events as the atomic building blocks of data

From the mind of one of our most innovative data engineers, this blog post looks at how we structured our data platform to break information down into its simplest, most useful form.

To learn more about Dwolla or the API, head to our developer documentation now.

Custom transaction limits and next day transactions for your users

Get paid more quickly with Dwolla NextDay.

We understand that not all businesses are created equal and that each has different needs. Whether it’s faster processing times or higher transaction limits, we are here to help you design a payment solution tailored to your business.

For approved partners, as part of a paid white label solution, Dwolla can enable those sending you funds to send up to a custom amount for each transaction and/or have those transactions process the next business day.

This allows users sending to an approved partner to bypass Dwolla’s $5,000 per transaction limit for personal accounts and $10,000 limit for business or nonprofit accounts. This also speeds up transfer times for transactions destined for an approved partner’s Dwolla account to one business day, instead of the standard 3-4 business day bank transfer processing time.
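For illustration, the standard caps and a custom partner limit can be expressed as a simple check. The dollar amounts come from the paragraph above; the helper itself is hypothetical:

```python
# Illustrative check of the standard per-transaction caps mentioned above;
# approved partners with custom limits substitute their negotiated amount.
STANDARD_LIMITS = {
    "personal": 5_000,
    "business": 10_000,
    "nonprofit": 10_000,
}


def within_limit(account_type, amount, custom_limit=None):
    """Return True when the amount fits the applicable per-transaction cap."""
    limit = custom_limit if custom_limit is not None else STANDARD_LIMITS[account_type]
    return amount <= limit


assert within_limit("personal", 5_000) is True
assert within_limit("personal", 5_000.01) is False
# An approved partner with a hypothetical $50,000 custom limit:
assert within_limit("personal", 25_000, custom_limit=50_000) is True
```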

Partners that benefit from raising the per transaction limit on transactions destined for their account are those that need to regularly move large sums of money, such as investment platforms facilitating transfers from investors to development projects.

Partners that benefit from increasing the standard bank transfer processing times of their users are those that frequently receive many payments, such as property management platforms receiving transfers from tenants each month.

If your platform facilitates high-dollar or high-volume bank transfers, you need a payment platform that doesn’t hinder, but supports, the way your business operates. Contact an integration specialist today.

Get started with your own integration

We’ll help you design your ideal payments experience.


Bank account balance as an API endpoint

Posted in API, API help, Blog, Developers, Dwolla developers, endpoints, Product Updates on April 12th, 2016

We’re making a new feature available for white label customers that lets them ask for users’ permission to check the balance in their bank account.

Dealing with returns is one thing, and we can appreciate that, but there are a great many applications where this can be valuable for a business or developer:

  • Mitigating risks when pre-funding. Some businesses choose to pre-fund accounts before the transaction clears. In that case, the balance endpoint can provide a view into the risk the business is really taking.
  • Mitigating risks in trading environments. One of the problems in a trading environment is reconciling what someone says they have when they make a trade with what they actually have. Many times, accounts for trading purposes are segregated and intended to be sacrosanct, but programmatically checking the balance of the associated account that provides liquidity for trades has previously been incredibly hard.
  • Other things we haven’t thought of yet. It’s important from our perspective to give developers and businesses a platform to innovate with. Users of white label software platforms can grant this permission if they find it valuable to do so, given the features associated with the platform they’re using.
  • Following the trend of the bank account transforming into a pre-loaded account. This feature gives software developers the ability to check a bank account balance the way pre-funded account balances are checked, similar to how checking a balance in a Dwolla account works in our V1 APIs.

So how does it actually work?

Once a software application gets the permission from the account holder and the funding source is added to a white label application through instant bank verification, the developer gets a GUID that represents the account. It looks like this:


That GUID is used with the /funding-sources/ endpoint to request the balance for that authorized funding source:


By adding ?balance=latest to the end of the request, the application requests the latest balance, and additional data is returned in the API response. Here is a sample of what that additional data looks like:

"balance": {
    "value": "107.28",
    "currency": "USD",
    "lastUpdated": "2016-04-09T00:12:43.527Z"
}
The full response would look something like this:

{
  "_links": {
    "self": {
      "href": "https://api.dwolla.com/funding-sources/692486f8-29f6-4516-a6a5-c69fd2ce854c"
    },
    "customer": {
      "href": "https://api.dwolla.com/customers/36e9dcb2-889b-4873-8e52-0c9404ea002a"
    }
  },
  "id": "692486f8-29f6-4516-a6a5-c69fd2ce854c",
  "status": "unverified",
  "type": "bank",
  "name": "Test checking account",
  "created": "2015-10-23T20:37:57.137Z",
  "balance": {
    "value": "107.28",
    "currency": "USD",
    "lastUpdated": "2016-04-09T00:12:43.527Z"
  }
}
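As a rough sketch (the helper functions below are ours, not part of any Dwolla SDK), a Node application might build the balance request URL and read the balance out of a response shaped like the one above:

```javascript
// Hypothetical helpers for the balance feature. The URL pattern and response
// shape come from the examples in this post; the function names are invented.
function balanceRequestUrl(fundingSourceId) {
  return 'https://api.dwolla.com/funding-sources/' + fundingSourceId + '?balance=latest';
}

// Pull the balance details out of a funding source response body.
function readBalance(responseBody) {
  var b = responseBody.balance;
  return { value: b.value, currency: b.currency, lastUpdated: b.lastUpdated };
}

var url = balanceRequestUrl('692486f8-29f6-4516-a6a5-c69fd2ce854c');
var sample = {
  balance: { value: '107.28', currency: 'USD', lastUpdated: '2016-04-09T00:12:43.527Z' }
};
var balance = readBalance(sample);
```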

This new feature will be made available to white label customers who are in our custom package.

Think you could make use of this new technology? Drop us a line and we’ll help you get started.

On-demand Bank Transfers, made easy


In 2011 we released an OAuth API that made it easy for developers to request transfer permissions from their customers with our Dwolla-branded platform.

Today we’re making the ability to bill your customer later available for our Dwolla White Label customers in our v2 API.

It’s called On-Demand Bank Transfers.

Developers using our white label APIs can let their payers authorize ACH transfers for variable amounts from their bank account at a later point in time, for products or services delivered. It’s one simple additional step: a quick authorization from the customer when they instantly verify a bank account. This is great for companies like:

  • Cloud computing services. Fees can be different every month, requiring ongoing authorization so a customer can easily pay for a service, and the company can easily bill for the service.
  • Utilities. A water company bill is rarely the same each month. Same with electrical, and gas. The amount collected at the end of each month is usage-based, or metered.
  • Ride sharing or asset sharing platforms. The amount a customer is charged for a ride across town depends on a variety of factors. We make it easy for sharing companies to bill their customers, while reducing the hassle for the end customer on each trip.
  • B2B services that bill on a variable basis. Some orders may require a bank transfer on NET terms and others may be fulfilled once the goods are delivered. Either way, both should be possible.

The instant bank verification and on-demand authorization both occur within Dwolla.js, making this incredibly easy to add to your software. It adds one extra step to the bank verification flow in order to acquire the account holder’s permission.

On-demand payments from Dwolla

Once you have collected all of the authorizations required for a bank transfer, including the additional authorization from the end user for on-demand bank transfers, your software application kicks off a transaction that looks like this whenever the customer needs to be billed:

{
    "_links": {
        "source": {
            "href": "https://api-uat.dwolla.com/funding-sources/5cfcdc41-10f6-4a45-b11d-7ac89893d985"
        },
        "destination": {
            "href": "https://api-uat.dwolla.com/customers/C7F300C0-F1EF-4151-9BBE-005005AC3747"
        }
    },
    "amount": {
        "currency": "USD",
        "value": "225.00"
    },
    "metadata": {
        "customerId": "8675309",
        "notes": "Payment for January 2016"
    }
}
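To illustrate, a hypothetical helper (the function is ours; only the field layout follows the example above) might assemble that transfer request body like this:

```javascript
// Sketch of assembling the on-demand transfer request body shown above.
// The field layout follows the v2 API example in this post; the helper is invented.
function buildTransferRequest(sourceFundingSourceId, destinationCustomerId, amountValue, metadata) {
  return {
    _links: {
      source: { href: 'https://api-uat.dwolla.com/funding-sources/' + sourceFundingSourceId },
      destination: { href: 'https://api-uat.dwolla.com/customers/' + destinationCustomerId }
    },
    amount: { currency: 'USD', value: amountValue },
    metadata: metadata
  };
}

var body = buildTransferRequest(
  '5cfcdc41-10f6-4a45-b11d-7ac89893d985',
  'C7F300C0-F1EF-4151-9BBE-005005AC3747',
  '225.00',
  { customerId: '8675309', notes: 'Payment for January 2016' }
);
```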

As usual, there are no per transaction fees for either party in the transaction, and with our white label services your brand is front and center. If you’re a developer and have more questions, head to our API Support discussion board and post your questions. Our engineering team regularly reads and responds on the board.

The cost and complexity of making bank transfers better is no secret, and it affects new fintech companies, large established businesses, and even big VC’s… We all have the same problem.

That’s why our team has spent the last 5 years focused on bank transfers, and it’s why we feel on-demand bank transfers are so incredibly valuable for our customers.

There is a better way, and we’re excited to be part of building a future where bank transfers are easier for businesses and developers.

Contact us to enable this for your application

Fake it as you make it: why fake services are awesome for developers


This blog post comes from Shea Daniels, a developer here at Dwolla. When Shea isn’t busy building awesome new things, you can find him out for a run.

It’s often said in life that we “stand on the shoulders of giants.” This rings especially true now that we’re in an era of abundant open source software and SaaS providers. Now, more than ever, we build applications by relying on tools and services that others have made. This may even be standard practice inside your own organization as other teams deliver functionality through a microservices architecture.

Building software by composing services is extremely powerful, but it can still be a rocky road. Several factors can make it difficult to write and test your code:

  • Complex scenarios may not be easy to test with real data
  • Running elaborate business logic may consume resources
  • Sandbox environments may not exist for 3rd party APIs

Just fake it

So what can be done to mitigate these issues? The answer is to fake it while you’re making it!

You can see this in everyday life. Whenever the real thing is too expensive or impractical, we sub it out with something fake as a stand-in—think movie props or mannequins for tailors. This is also a fairly common engineering practice; my favorite examples are the boilerplate capsules used to evaluate rockets and other space hardware.

In the software world, if you practice TDD you should be familiar with the use of test doubles (mocks, fakes, and stubs) for dependencies in your unit testing code. Used instead of the real implementations of objects, fake dependencies isolate the code under test by providing predictable results given certain input. This isolation is useful for tracking down issues and fully exercising your code without complicated setup.

The same concept can be applied when developing an integration with a third party service. By building a fake copy of the web service, you gain the same advantages of isolation and repeatability as you test your application. This is especially useful if the service you’re depending on is being developed in tandem and has yet to be fully implemented.

There are some existing tools for quickly standing up your own fake services, such as Nock and Frock. But with Node.js and a few NPM packages, it’s easy enough to build your own from scratch.

In this post we’ll include:

  • An actual example
  • How to get started
  • Possible scenarios
  • Some of the downsides

A real example

Let’s break down a real example that Dwolla has recently open sourced: Nodlee. You can see it in action by checking out our instant bank account verification demo—here it’s used as a backing service.


Getting started

Nodlee is a simple web server written in JavaScript and run via Node.js. It depends on the following NPM packages, which you can see in the package.json file:

  • express – web server framework
  • minimist – argument parser library
  • node-cache – in-memory caching library

If you haven’t used Node or express before, there are a ton of great tutorials, or you can read through the Nodlee source code to get a feel for it. The readme has a lot of great info and the code entry point is app.js.


The first thing to do when building out a fake service is to look at the documentation for the real API and experiment with the service to discover how it works. With that knowledge, you can figure out which endpoints need to be mocked and what the responses should look like.

For a simple example, here’s the Nodlee health check endpoint response: health.js

module.exports = function (req, res) {
  res.json({ healthy: true });
};
These responses can be as simple or as complicated as needed. If returning the same canned response every time isn’t enough, consider scanning the request for sentinel values that you define. Then you can use those values to decide which data to send back. You can even use a templating language like Handlebars for generating your responses if you want to get swanky.
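The sentinel-value idea can be sketched like this; the magic usernames and response shapes here are our own invention, purely for illustration:

```javascript
// Sentinel-value dispatch: magic input values select which canned response the
// fake service returns. All values and shapes below are invented examples.
var cannedResponses = {
  'user-locked': { status: 'error', code: 'ACCOUNT_LOCKED' },
  'user-empty': { status: 'ok', accounts: [] }
};

function respondFor(username) {
  // Fall back to a generic success response when no sentinel matches.
  return cannedResponses[username] || { status: 'ok', accounts: [{ id: 'fake-1' }] };
}
```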

Complex scenarios

For the instant account verification product we were building, even sentinel values and templating weren’t quite enough. We found that we were constantly editing multiple files in the fake service code to set up complex scenarios.

The first step to making this easier was to consolidate all of the possibilities that determined a particular scenario into a single list of options in the code: scenario.js

module.exports = function Scenario() {

	this.contentServiceId = 0;
	this.isMfa = true;
	this.mfaTimeout = false;
	this.mfaPass = true;
	this.mfaTypes = ['q'];
	this.refreshesSet = 1;
	this.refreshesLeft = 1;
	this.errorType = 0;
	this.accounts = [
		{
			routingNumber: '222222226',
			accountNumber: '5031123001',
			accountType: 'CHECKING',
			accountHolder: 'John Wayne',
			accountName: 'Your Account #1',
			balance: '1000'
		}
	];
};
This object can then be checked in all of the service endpoints in order to determine the appropriate response. With this in place, developers could set up the flow to behave however they wanted just by editing this single file.
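As a hedged sketch of what “checking the object in an endpoint” looks like (the handler and response shapes below are invented, not taken from Nodlee):

```javascript
// Illustrative only: a constructor carrying two of the scenario options from
// scenario.js, and a handler that consults them to choose its response.
function Scenario() {
  this.isMfa = true;
  this.errorType = 0;
}

function loginResponse(scenario) {
  if (scenario.errorType !== 0) {
    return { status: 'error', code: scenario.errorType };
  }
  return { status: 'ok', mfaRequired: scenario.isMfa };
}

// A developer editing the single options file can force an error flow:
var scenario = new Scenario();
scenario.errorType = 402;
```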

Sentinel values on steroids

We have local development covered now, but what about testing in sandbox environments where we can’t edit the fake service code? Not only that, but what if we wanted coverage of our flow with automated UI tests (e.g. Robot Framework)?

What we need now is a service with a memory longer than a single web request and a way for automated tests to trigger whatever scenario is needed. This is where the minimist and node-cache NPM packages come into play.

With minimist, we are able to take certain inputs in a web request and treat them as if they were command line interface options. Those options can then be translated in order to set the properties of the Scenario object we’ve just discussed:

var scenarioData = optionsParser.parse(req.body.someInput);


exports.parse = function(options) {

	var data = new Scenario();

	if (options.indexOf("-") < 0) {
		return data;
	}

	var args = parseArgs(options.split(' '));

	if (args['nomfa'])
		data.isMfa = false;

	// ...further option checks set the other Scenario properties (elided here)

	return data;
};
Now that we have the options set for the scenario we want, we use node-cache to persist it across web requests: scenarioManager.js

var cache = new NodeCache({ stdTTL: 900, checkperiod: 300 });

exports.set = function(userSessionToken, scenarioData) {

	cache.set(userSessionToken, scenarioData);
	return scenarioData;
};
Now we can use the cache to access the scenario that’s currently being tested at any point we need to build a response: getMfaResponse.js:

module.exports = function(req, res) {

    scenarioManager.get(req.body.userSessionToken, function(scenarioData) {

        if (!scenarioData.mfaTypes) {
            // no MFA configured for this scenario (response body elided)
        } else if (scenarioData.mfaTypes.length < 1) {
            // MFA list is empty (response body elided)
        } else {
            // build the MFA response from scenarioData (response body elided)
        }
    });
};
The downsides

As with anything, fake services are not a silver bullet. There are a few caveats to keep in mind:

  • Are you sure you understand the real service well enough to build a fake version of it?
  • Watch for edge cases where you may not be duplicating the behavior of the real service
  • If you do find edge cases, be sure to cover them with appropriate tests; manual testing with your fake service is not a replacement for good test coverage with unit/integration tests
  • Plan for regular maintenance of your fake to keep up with any changes in the interface or behavior of the API you depend on
  • Using a fake does not relieve you from the task of running your code against the genuine article

The last bullet point is important since there’s a large difference between “should work” and “actually works.” At some point in your workflow you’ll need to test the full production stack!


Here at Dwolla we’re committed to making developers’ lives easier by providing a great API for moving money with ACH transfers. We’ve found the concept of fake services to be invaluable in making this happen. If you found this useful, please share this article and comment with your own experiences. Happy building!

Check out the documentation
