This is the key information from a current project setting up the Data API - for the single purpose of data transfer between FileMaker Server and FileMaker Go.
Takeaways.
The first takeaway is technical information regarding setting up the Data API for use in a Sync solution. There are many other blogs regarding the Data API itself - here you will find solution-focused functions, code and workflow examples.
The second takeaway is for the reader to take the time to consider User Research and Technical Options for each function they design and develop, to ensure the best opportunity of delivering the correct solution to the Client and the End Users.
Project Checklist.
As part of my own productivity method, I create To Do Lists or Bullet Points for discussions - with points that evolve as I create the solution - here's the Checklist for this project.
User Requirement
Technical Requirement
Authentication Requirement
Location Requirement
Offline Requirement
UAT Checklist
Define your sync model
Code Options Research
Data API Publishing
Enabling the Data API
Data API Documentation References
FileMaker Server Data API calls
Authentication and Tokens
Calling Data API records via Insert URL
Working with different data types
Container Data Up and Down
Record Data Structure
User Requirements.
Auditors need to work 'offline' to create Safety Audits on Personnel, Plant and Locations. The data required to select the correct auditee is sourced from various tables of data, with extra data tables created on the fly, plus the suite of templates for the Audits.
This data is continually updated live on the server, with updates 'published' every hour.
Technical Requirements.
Data formats coming from (and to) the FileMaker Server include text, numbers, dates and container data - in this case - via Base64 encoding of the binary data - more on that later.
It's worth defining whether 'true sync' is required - where records created on both the server and the client need to be shared - with multiple server users and multiple iPad users.
It's also worth defining - for any 'sync' requirement - whether full CRUD is required, in or out.
In this case, there are many opportunities for the hosted data to have records Created, Updated - and on rare occasions - Deleted. These updated records are required to be available on the iPad, and the iPad needs to push new Audit records to the Server.
Authentication Requirement.
The authentication model requires six different levels of data permission, with a pool of over 100 auditors on iPads, and different users only requiring access to their own group's Audits.
Location Requirement.
It's also worth considering if users are spread across different time zones.
Offline Requirement.
A key component of the requirement is to receive the published data from the server while on the cellular network or the company wifi - the iPads are set up with an APN for the Company network - allowing the Audits to be created and completed offline, and then 'synced' back to the FileMaker Server when back on the cellular network or the company wifi.
Requirement Checklist.
Back to our client's requirements, here's the bullet point Checklist:
Deliver the initial files to the iPad
Authenticate to FileMaker Server
Ensure iPad has connection to FileMaker Server
Check FileMaker Server for latest data
Inform User of latest data
Refresh (replace) latest data
Refresh (replace) latest Templates
Collect URL calls from Audits for syncing back
Syncing back Audit data to FileMaker Server
Update iPad FileMaker Go files with latest files
Research Technical Options.
As software developers, we are here to solve problems - and to ensure we don't create further problems, but let's discuss that later. One of the key benefits of today's world is that although we are creating bespoke custom apps, we are solving problems that others have already approached, and they have shared their insights and/or methods via blogs.
Define your 'sync' model.
It's worth making this official Claris article - Database synchronization - an overview of approaches - part of your research when defining 'your' sync requirements.
When researching and defining your own 'sync' solution, ensure your decision is not simply to take an 'off-the-shelf' sync solution that does not correctly fit your needs - a correct fit is the real reason we develop bespoke solutions - and end up compromising your results by not properly researching the user needs and the toolkit available to you.
Example of a 'Sync' Blog.
Here's an excellent 2013 blog from John Sindelar - FileMaker Sync: what we've learned making mobile apps. Take the time to read John's approach, and in particular his context for mobile apps and his shift in thinking when developing a distributed app.
Please ensure you evaluate your requirements before deciding to apply an existing 'sync' framework. This may sound like it contradicts John's warning about building your own code - my warning is purely to fully understand the requirements 'before' making that decision.
Code Options Research.
With the advent of the FileMaker 'native' Data API, and this client's planned update to FileMaker 19, the time had come to review the 'syncing' code of the iPad solution.
I can recommend - or should I say I 'strongly' recommend - periodically reviewing the design of your code. The trigger can be a new version of software or an adjustment of the user requirements.
There's a huge benefit in finding the right existing code, as our community is very open to sharing solutions - as long as you are clear on the requirement, and the existing code is the best fit, without compromise.
I'll make this point again - this is why we build bespoke solutions - our clients' needs are unique and the FileMaker code is customisable.
Back to research - there are other options for 'sync' - and I encourage the developer to research and have a clear understanding of the strengths and weaknesses of each product.
Enabling the Data API.
Setting up the Data API is as simple as turning on the Service in the Connectors of the FileMaker Server Admin Console. As always, I recommend setting up a TEST FileMaker Server, rather than making un-tested changes to your LIVE Environment.
Deliver the initial iPad FileMaker Go files.
A simple method is to place the FileMaker files at the root level of the IIS Server, and have the iPad client enter the URL and download the file to FileMaker Go. This can be via the Company Mobile Device Management (MDM), or even an email with the link for download.
The start-up script of the file first checks whether the file is the latest, then downloads the latest file to the iPad.
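As a hedged sketch - the URL, file and field names here are illustrative, not the project's actual code - the start-up check could look like this:

# On open: compare a version marker on the web server with the local version
Set Variable [ $_version_url; Value:"https://" & $Server & "/go/version.txt" ]
Insert from URL [ $_server_version; $_version_url ]
If [ GetAsNumber ( $_server_version ) > GetAsNumber ( Globals::LocalVersion ) ]
	Show Custom Dialog [ "Update"; "A newer file is available. Download now?" ]
	If [ Get ( LastMessageChoice ) = 1 ]
		Open URL [ "https://" & $Server & "/go/Audits.fmp12" ]
	End If
End If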
Refresh the iPad Client Data.
Due to this 'non-full syncing' requirement - as the developer and designer of the solution - you have a few choices.
Option 1.
The iPad client script queries the API, and the record set is returned in .json. Each record in the Server feed is checked against the Client, and records are updated. Repeat for each table of data.
Working with a small data set is fine; once the record count and the number of tables increase, this process can be extensive and time consuming.
Option 2.
Another option is to download a fresh file for each table, delete all the records for that table, and import a fresh set of records.
Not as exhaustive as comparing each record, although again, as each table is added, the time required increases.
Option 3.
Replace the entire data file on the iPad, with a fresh file from the server. Depending on your data structure, this can be the whole solution, or a separate data table.
The latest file on the Server can be retrieved either via the Data API, or natively as a FileMaker Network client - remembering that the local file must be closed before it can be replaced.
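Here's a minimal sketch of the replace step - assuming a separate data file named 'AuditData' and a global container field for the download; your structure and names will differ:

# Replace the local data file with a fresh copy from the server
Close File [ "AuditData" ]
Insert from URL [ Globals::Download; $_file_url; cURL options: $_curl_options ]
Export Field Contents [ Globals::Download; Get ( DocumentsPath ) & "AuditData.fmp12" ]
Open File [ "AuditData" ]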
Solution.
Comparing Option 2 and Option 3, the data Refresh process was reduced from 120 seconds to just under 10 seconds.
Data API Publishing.
It's time to start investigating the Data API - if you have not already - both for publishing your data - this includes to FileMaker clients - and for transferring data to the Server.
And if you have made the jump to FileMaker 19 - the Data API can return data in .json from within the FileMaker client itself - although that is an article for another day.
Data API Documentation References.
There are plenty of guides for preparing your existing files for the Data API - this blog and 10-minute video by Mason Stenquist from DB Services have an added benefit regarding Postman and FileMaker - and I found that this FileMaker Data API Demo provided by Darren Burgess from Beezwax is a good place to start.
That said, the use of Button 'Parameters' where .json Elements are constructed adds an extra layer of complexity, especially for the Developer new to the Data API and/or new to .json.
At the risk of taking you out of your FileMaker comfort zone - and that is exactly what we will be doing - it's also well worth setting up Postman as part of your development toolkit.
This is the added benefit of the DB Services blog - the inclusion of the Postman Data API collection, which is referenced in the video.
Based on the above references, and any others you care to view and learn from, you will have the base knowledge to interact with the Data API like a long-time Pro.
JSON Formatting Tool.
I find this JSON Path Finder website really helpful when creating .json, and also when you need to extract a node or data value from the .json.
FileMaker Server Data API calls.
There are a few structural methods for constructing the calls to the Data API, along with a number of caveats - or dare I say gotchas.
And all these code options are without plugins, as FileMaker Go is the main client platform.
Authentication and Tokens.
With all calls to the Data API, we have a URL and the cURL options.
To take the Postman Collection example, here's the URL for Authentication:
https://{{server}}/fmi/data/v1/databases/{{database}}/sessions
Set up your variables at the top of your scripts - here's an example of the URL variable:
$Protocol & $Server & "/fmi/data/v1/databases/" & $Database & "/sessions"
The $Protocol Variable may be a given, although we should all be doing https:// these days.
The advantages of an internal certificate on your FileMaker Server are numerous.
Here are the variables for the Authentication, before we set up the cURL options:
Set Variable [ $_authentication_json; Value:"{}" ]
Set Variable [ $_authorization_string; Value:Base64EncodeRFC ( 3548 ; $Username & ":" & $Password ) ]
Now for the FileMaker cURL options for Authentication:
"-X POST" &
" -D $_response_headers " &
" --header " & Quote ( "Content-Type: application/json" ) &
" --header " & Quote ( "Authorization: Basic " & $_authorization_string ) &
" --connect-timeout 10" &
" -d @$_authentication_json"
The default timeout for the Data API is 60 seconds; this connect-timeout line reduces the timeout to 10 seconds.
Call your Insert from URL script step - remembering to select the 'Automatically encode URL' option.
Insert from URL [ $_json_result; $URL; cURL options: $_curl_options ]
Store the returned token in a Field, Local or Global Variable, or call a new one each time.
The tokens are per file, and expire 15 minutes after the last call - therefore calling the server every 14 minutes keeps the token alive.
Set Variable [ $_api_token; Value:JSONGetElement ( $_json_result ; "response.token" ) ]
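If you store the token rather than calling a new one each time, one way to keep it alive is an OnTimer script - a sketch, assuming a script named 'Keep Token Alive' that performs any lightweight Data API call:

# 840 seconds = 14 minutes, safely inside the 15 minute token expiry
Install OnTimer Script [ "Keep Token Alive"; Interval: 840 ]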
Calling Data API records via Insert URL.
Once we have our Token, this becomes part of the request for records.
URL for returning 'All' records
$Protocol & $Server & "/fmi/data/v1/databases/" & $Database & "/layouts/" & $Layout & "/records"
cURL options
"-X GET " &
"--header " & "\"" & "Content-Type:application/json" & "\" " &
"--header " & "\"" & "Authorization: Bearer " & $_api_token & "\" " &
"--header " & "\"" & "Content-Length: 0" & "\" " &
"--show-error" &
"-d @$data "
In this case, the Data API layout includes container data in Base64 format.
Without the Content-Length option, the API returns an error - which can throw the Developer, as Postman does not return this same error. The error is due to the size of the container data exceeding the size limit.
Here's an opportunity to experiment with how you construct your code.
Which option do you find more readable, and easier to edit?
"--header " & "\"" & "Content-Length: 0" & "\" "
"--header \"Content-Length: 0\" "
"--header " & Quote ( "Content-Length: 0" )
Personally, I find escaping to insert quotes tiresome, and harder to read and edit.
URL for returning 'found set' of records
$Protocol & $Server & "/fmi/data/v1/databases/" & $Database & "/layouts/" & $Layout & "/_find"
cURL options
"-X POST " &
"--header " & "\"" & "Content-Type:application/json" & "\" " &
"--header " & "\"" & "Authorization: Bearer " & $_api_token & "\" " &
"-d @$_data "
And the $_data payload:
JSONFormatElements (
	JSONSetElement ( "{}" ; "query" ;
		JSONSetElement ( "[]" ; 0 ;
			JSONSetElement ( "{}" ; "Groups" ; TableData::PrivilegeSet ; JSONString ) ;
			JSONObject
		) ;
		JSONArray
	)
)
You'll notice we're setting the value to search 'Groups' from field data.
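For illustration, with a sample value of 'Auditor_Group_A' in TableData::PrivilegeSet, the formatted $_data payload would be along these lines:

{
	"query" :
	[
		{
			"Groups" : "Auditor_Group_A"
		}
	]
}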
Working with different data types.
The Data API supports field types such as text, number, date, timestamp and container.
Date and Timestamps.
The first consideration is the behaviour whereby Date and Timestamp fields are returned as mm/dd/yyyy.
This is regardless of the file or operating system preference for Date format.
There are many of us - around the world - who adhere to dd/mm/yyyy.
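One defensive pattern - a sketch with a hard-coded sample value - is to split the returned text and rebuild the date explicitly, so the result doesn't depend on the system date format:

// Rebuild a Data API date ( mm/dd/yyyy ) as a FileMaker date
Let ( [
	_parts = Substitute ( "12/31/2024" ; "/" ; ¶ ) ;
	_month = GetValue ( _parts ; 1 ) ;
	_day = GetValue ( _parts ; 2 ) ;
	_year = GetValue ( _parts ; 3 )
] ;
	Date ( _month ; _day ; _year )
)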
It's also worth considering timezones in regard to comparing Timestamps.
Users in a different timezone will have an inaccurate comparison against the Server Timestamps when ensuring the iPad data is current.
Where possible, ensure each table not only has a Text UUID, but also a ModificationUTC number field for each record.
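A simple way to implement the ModificationUTC field is an auto-enter calculation on a number field, with 'Do not replace existing value' unchecked:

// Auto-enter calculation for the ModificationUTC number field
Get ( CurrentTimeUTCMilliseconds )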
Container Data Up and Down.
As previously mentioned, binary data can be converted to Base64 and transferred as text fields via the Data API.
To download the container data to the iPad, add the container field on the Data API layout, and the data returned is the URL to download the file - directly - without being included in the data quota.
As an example, the following container data field is called 'File', and here's the code where we are looping through the records, with $i as the element required:
Set Variable [ $_data; Value:JSONGetElement ( $_api_result ; "response.data[" & $i & "].fieldData" ) ]
From this record, we can then extract the data from the Container field:
Set Variable [ $_file; Value:JSONGetElement ( $_data ; "File" ) ]
which returns the URL to download the File:
https://<server>/Streaming_SSL/MainDB/2932827DA49A1614F2233E51D50479709D4F7B918376B9B33927EA1B2D06884B?RCType=EmbeddedRCFileProcessor
Now, here's a gotcha - for the 'Insert from URL' script step, where the URL is the $_file variable, ensure you include this cURL option:
"-L --cookie-jar $myjar"
Otherwise, you will receive a .txt file, with this content:
<html> <head><title>Moved Temporarily</title></head> <body> <h1>302 Moved Temporarily</h1> </body> </html>
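Putting the pieces together - a sketch of the download step, re-using $_data as the target variable, which is what the rename calculation below expects:

# Download the container file, following the 302 redirect
Set Variable [ $_curl_options; Value:"-L --cookie-jar $myjar" ]
Insert from URL [ $_data; $_file; cURL options: $_curl_options ]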
And one more gotcha - you'll probably need to convert the filename of the container data, as it's currently the long string you see as part of the URL - try this code:
Base64Decode ( Base64Encode ( $_data ) ; $FileName )
To upload the container data from the iPad, the record must pre-exist - so the container cannot be part of the initial fields in the cURL payload when creating a new record.
Again, Container data can be converted to Base64 and uploaded as part of the new record payload. The decision is to either stay native and make two calls, or Base64 the Container and perform one Data API call.
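If you stay native with two calls, the second call uses the Data API's dedicated container endpoint - here's a sketch, following the URL-variable pattern above, where $_record_id (the recordId returned when creating the record) and $_container are illustrative names, and 'upload' is the form field name the Data API expects:

# Upload the container to the existing record (field 'File', repetition 1)
Set Variable [ $URL; Value:$Protocol & $Server & "/fmi/data/v1/databases/" & $Database & "/layouts/" & $Layout & "/records/" & $_record_id & "/containers/File/1" ]
Set Variable [ $_curl_options; Value:"-X POST " &
	" --header " & Quote ( "Authorization: Bearer " & $_api_token ) &
	" -F " & Quote ( "upload=@$_container" ) ]
Insert from URL [ $_json_result; $URL; cURL options: $_curl_options ]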
Record Data Structure.
There are a number of options for collecting the set of fields - before collecting the records - for the payload for the transfer to the Data API.
It's worth taking the time to learn how to format a correct .json element. It's then worth finding a solution to auto-create the .json element from a list of fields.
Have a look at this custom function from Jonathan Mickelson - JSONFromFields.
Combine this with the native function to list the fields on a layout, and we have a clean result. And while we are at it, include the JSON function to format the .json:
JSONFormatElements (
	"{\"fieldData\":" & ¶ &
	JSONFromFields ( "" ; FieldNames ( "<FileName>" ; "<LayoutName>" ) ; "" ) & ¶ &
	"}"
)
Again, there are other methods - this is just one - and now we move on to the next step.
Offline Sync Record Collection.
This client's requirement is the ability to complete multiple offline Audits, which are then 'synced' or uploaded to the server once back online.
The Audit itself cannot be synced at the time of completion. This is a key requirement.
Most sync frameworks expect the data to be in a single file, with multiple tables and then to sync records in that context, back to the server.
As a solution, a URL table is created in the main delivery file, with a number of fields:
URL
cURL
Data
Submitted
Other key fields - relevant for any table - such as a UUID, Created & Modified flags, even an error field.
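As a hedged sketch - the table, field and variable names here are illustrative - queuing one Audit record might look like this:

# Queue one new Audit record for later sync
New Record/Request
Set Field [ URLQueue::URL; $Protocol & $Server & "/fmi/data/v1/databases/" & $Database & "/layouts/" & $Layout & "/records" ]
Set Field [ URLQueue::cURL; $_curl_options ]
Set Field [ URLQueue::Data; $_payload_json ]
Set Field [ URLQueue::Submitted; 0 ]
Commit Records/Requests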
There are multiple options for sending new records to the Hosted Solution on the Server.
Option 1.
Within the structure of the Data API, one URL record is required per record to be synced.
With this data structure, regardless of the fields in each of the five tables for the audits, the records for 'sync' can be collected in the Delivery file, ready for when the iPad is back on the network.
It's worth including a count of related records in the parent record, as a flag that all records have made the journey back to the Server.
The process also needs to include a function to validate the data to be sent back 'before' it is queued - as records sent and created on the Server cannot be called back.
Option 2.
The data collected in the URL table can form the structure of the data to be passed to the script on the Server.
The data being passed back to the Server can be multiple records, and as these records are sent together, if one record arrives, all records arrive.
It's also worth considering that any failed data validations would need to be passed back to the client, with options to roll back the created records on the Server.
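For illustration only - this assumes a server-side script that parses its script parameter, and the structure is not from the project - a multi-record payload might look like:

{
	"audits" :
	[
		{ "AuditDate" : "06/01/2024" , "UUID" : "..." },
		{ "AuditDate" : "06/02/2024" , "UUID" : "..." }
	]
}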
Solution.
In this case, Option 1 was selected, to reduce the complexity of the scripts and code involved in sending the Audits to the Server.
This decision was weighed against Option 1 requiring approximately 60 records to be synced to the Server per Audit.
Conclusion.
I hope you have enjoyed this journey from concept to code, with consideration to design patterns focused on the client, and also the developer code.
Feel free to reach out with any questions or comments . . .