Planet Drupal

Drupal Mountain Camp: Drupal Mountain Camp 2019 - Open Source on top of the World - Davos, Switzerland, March 7-10

6 days 16 hours ago
admin - Thu, 01/10/2019 - 08:41


Drupal Mountain Camp brings together experts and newcomers in web development to share their knowledge in creating interactive websites using Drupal and related web technologies. We are committed to uniting a diverse crowd from different disciplines: developers, designers, and project managers, as well as agency and community leaders.

Keynotes

The future of Drupal communities

For the first keynote, Drupal community leaders such as Nick Veenhof and Imre Gmelig Meijling will discuss successful models for creating sustainable open source communities, and how we can improve collaboration in the future to ensure even more success for the open web. This keynote panel will be moderated by Rachel Lawson.

Drupal Admin UI & JavaScript Modernisation initiative

In the second keynote, Matthew Grill, one of the Drupal 8 JavaScript subsystem maintainers, will present on the importance and significance of the Admin UI & JavaScript Modernisation initiative and Drupal’s JavaScript future.


In sessions, we will share the latest and greatest in Drupal web development, as well as learn from real-world implementation case studies. Workshops will enable you to grow your web development skills in a hands-on setting. Sprints will show you how contributing to Drupal can teach you a lot while improving the system for everyone.

Swiss Splash Awards

As a highlight, the Swiss Splash Awards will determine the best Swiss Drupal web projects selected by an independent jury in 9 different categories. These projects will also participate in the global Splash Awards at DrupalCon Europe 2019.


Drupal Mountain Camp takes place at Davos Congress. As tested by various other prominent conferences, and by ourselves in 2017, this venue provides a great space for meeting each other. We are glad to be able to offer conference attendees high-quality equipment and flawless internet access, all in an inspiring setting. Davos is located high up in the Swiss Alps, reachable from Zurich airport via a beautiful two-hour train ride up the mountains.

The camp

Drupal Mountain Camp is all about creating a unique experience, so prepare for some fun social activities. We’ll make sure you can test the slopes on skis and snowboards, or join us for evening activities open to any skill level, such as sledding or ice skating.


Drupal Mountain Camp is committed to being a non-profit event, with early bird tickets available for just CHF 80,- covering the 3-day conference including food for attendees. This wouldn't be possible without the generous support of our sponsors. Packages are still available; the following are already confirmed: Gold sponsors: MD Systems, Amazee Labs. Silver: Gridonic, Hostpoint AG, Wondrous, Happy Coding, Previon+. Hosting partner:

Key dates
  • Early bird tickets for CHF 80,- are available until Monday January 14th, 2019

  • Call for sessions and workshops is open until January 21st, 2019

  • Selected program is announced on January 28th, 2019

  • Splash Award submissions are open until February 4th, 2019

  • Regular tickets for CHF 120,- end on February 28th, 2019; after that, late bird tickets cost CHF 140,-

  • Drupal Mountain Camp takes place in Davos Switzerland from March 7-10th, 2019

Join us in Davos!

Visit our site or check our promotion slides to find out more about the conference, secure your ticket, and join us to create a unique Drupal Mountain Camp 2019 - Open Source on top of the World in Davos, Switzerland, March 7-10th, 2019.

Drupal Mountain Camp is brought to you by Drupal Events, the Swiss Drupal association formed to promote and cultivate Drupal in Switzerland.

Virtuoso Performance: Drupal file migrations: The s3fs module

1 week ago
mikeryan - Wednesday, January 9, 2019 - 01:56pm

A recent project gave me the opportunity to familiarize myself with the Drupal 8 version of the S3 File System (s3fs) module (having used the D7 version briefly in the distant past). This module provides an s3:// stream wrapper for files stored in an S3 bucket, allowing them to be used as seamlessly as locally stored public and private files. First we present the migrations and some of the plugins implemented to support import of files stored on S3 - below we will go into some of the challenges we faced.

Our client was already storing video files in an S3 bucket, and it was decided that for the Drupal site we would also store image files there. The client handled bulk uploading of images to an "image" folder within the bucket, using the same (relative) paths as those stored for the images in the legacy database. Thus, for migration we did not need to physically copy files around (the bane of many a media migration!) - we "merely" needed to create the appropriate entities in Drupal pointing at the S3 location of the files.

The following examples are modified from the committed code - to obfuscate the client/project, and to simplify so we focus on the subject at hand.

Image migrations

Gallery images

In the legacy database all gallery images were stored in a table named asset_metadata, which is structured very much like Drupal's file_managed table, with the file paths in an asset_path column. The file migration looked like this:

```yaml
id: acme_image
source:
  plugin: acme
process:
  filename:
    plugin: callback
    callable: basename
    source: asset_path
  uri:
    # Construct the S3 URI - see implementation below.
    plugin: acme_s3_uri
    source: asset_path
  # Source data created/last_modified fields are YYYY-MM-DD HH:MM:SS - convert
  # them to the classic UNIX timestamps Drupal loves. Oh, and they're optional,
  # so when empty leave them empty and let Drupal set them to the current time.
  created:
    - plugin: skip_on_empty
      source: created
      method: process
    - plugin: callback
      callable: strtotime
  changed:
    - plugin: skip_on_empty
      source: last_modified
      method: process
    - plugin: callback
      callable: strtotime
destination:
  plugin: entity:file
```

Because we also needed to construct the S3 uris in places besides the acme_s3_uri process plugin, we implemented the construction in a trait which cleans up some inconsistencies and prepends the image location:

```php
trait AcmeMakeS3Uri {

  /**
   * Turn a legacy image path into an S3 URI.
   *
   * @param string $value
   *
   * @return string
   */
  protected function makeS3Uri($value) {
    // Some have leading tabs.
    $value = trim($value);
    // Path fields are inconsistent about leading slashes.
    $value = ltrim($value, '/');
    // Sometimes they contain doubled-up slashes.
    $value = str_replace('//', '/', $value);
    return 's3://image/' . $value;
  }

}
```

So, the process plugin in the image migration above uses the trait to construct the URI, and verifies that the file is actually in S3 - if not, we skip it. See the Challenges and Contributions section below for more on the s3fs_file table.

```php
/**
 * Turn a legacy image path into an S3 URI.
 *
 * @MigrateProcessPlugin(
 *   id = "acme_s3_uri"
 * )
 */
class AcmeS3Uri extends ProcessPluginBase {

  use AcmeMakeS3Uri;

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    $uri = $this->makeS3Uri($value);
    // For now, skip any images not cached by s3fs.
    $s3_uri = \Drupal::database()->select('s3fs_file', 's3')
      ->fields('s3', ['uri'])
      ->condition('uri', $uri)
      ->execute()
      ->fetchField();
    if (!$s3_uri) {
      throw new MigrateSkipRowException("$uri missing from s3fs_file table");
    }
    return $uri;
  }

}
```

The above creates the file entities - next, we need to create the media entities that reference the files above via entity reference fields (and add other fields). These media entities are then referenced from content entities.

```yaml
id: acme_image_media
source:
  plugin: acme
process:
  # For the media "name" property - displayed at /admin/content/media - our
  # first choice is the image caption, followed by the "event_name" field in
  # our source table. If necessary, we fall back to the original image path.
  name:
    -
      # Produces an array containing only the non-empty values.
      plugin: callback
      callable: array_filter
      source:
        - caption
        - event_name
        - asset_path
    -
      # From the array, pass on the first value as a scalar.
      plugin: callback
      callable: current
    -
      # Some captions are longer than the name property length.
      plugin: substr
      length: 255
  # Entity reference to the image - convert the source ID to Drupal's file ID.
  field_media_image/target_id:
    plugin: migration_lookup
    migration: acme_image
    source: id
  # Use the name we computed above as the alt text.
  field_media_image/alt: '@name'
  # We need to explicitly set the image dimensions in the field's width/height
  # subfields (more on this below under Challenges and Contributions). Note that
  # in the process pipeline you can effectively create temporary fields which
  # can be used later in the pipeline - just be sure they won't conflict with
  # anything that might be used within the Drupal entity.
  _uri:
    plugin: acme_s3_uri
    source: asset_path
  _image_dimensions:
    plugin: acme_image_dimensions
    source: '@_uri'
  field_media_image/width: '@_image_dimensions/width'
  field_media_image/height: '@_image_dimensions/height'
  caption: caption
destination:
  plugin: entity:media
  default_bundle: image
migration_dependencies:
  required:
    - acme_image
```

Other images

The gallery images have their own metadata table - but, there are many other images which are simply stored as paths in content tables (in some cases, there are multiple such path fields in a single table). One might be tempted to deal with these in process plugins in the content migrations - creating the file and media entities on the fly - but that would be, well, ugly. Instead we implemented a drush command, run before our migration tasks, to canonicalize and gather those paths into a single table, which then feeds the acme_image_consolidated and acme_image_media_consolidated migrations (which end up being simpler versions of acme_image and acme_image_media, since "path" is the only available source field).

```php
function drush_acme_migrate_gather_images() {
  // Key is legacy table name, value is list of image path columns to migrate.
  $table_fields = [
    'person' => [
      'profile_picture_path',
      'left_standing_path',
      'right_standing_path',
    ],
    'event' => [
      'feature_image',
      'secondary_feature_image',
    ],
    'subevent' => [
      'generated_medium_thumbnail',
    ],
    'news_article' => [
      'thumbnail',
    ],
  ];
  $legacy_db = Database::getConnection('default', 'migrate');
  // Create the table if necessary.
  if (!$legacy_db->schema()->tableExists('consolidated_image_paths')) {
    $table = [
      'fields' => [
        'path' => [
          'type' => 'varchar',
          'length' => 191, // Longest known path is 170.
          'not null' => TRUE,
        ],
      ],
      'primary key' => ['path'],
    ];
    $legacy_db->schema()->createTable('consolidated_image_paths', $table);
    drush_print('Created consolidated_image_paths table');
  }
  $max = 0;
  foreach ($table_fields as $table => $field_list) {
    drush_print("Gathering paths from $table");
    $count = 0;
    $query = $legacy_db->select($table, 't')
      ->fields('t', $field_list);
    foreach ($query->execute() as $row) {
      // Iterate the image path columns returned in the row.
      foreach ($row as $path) {
        if ($path) {
          $len = strlen($path);
          if ($len > $max) {
            $max = $len;
          }
          $path = str_replace('//', '/', $path);
          $count++;
          $legacy_db->merge('consolidated_image_paths')
            ->key('path', $path)
            ->execute();
        }
      }
    }
    // Note we will end up with far fewer rows in the table due to duplication.
    drush_print("$count paths added from $table");
  }
  drush_print("Maximum path length is $max");
}
```

Video migrations

The legacy database contained a media table referencing videos tagged with three different types - internal, external, and embedded. "Internal" videos were those stored in S3 with a relative path in the internal_url column; "external" videos (most on client-specific domains, but with some YouTube domains as well) had a full URL in the external_url column; and "embedded" videos were, with very few exceptions, YouTube videos with the YouTube ID in the embedded_id column. It was decided that we would migrate the internal and YouTube videos, ignoring the rest of the external/embedded videos. Here we focus on the internal (S3-based) videos.

```yaml
id: acme_video
source:
  plugin: acme_internal_video
  constants:
    s3_prefix: s3://
process:
  _trimmed_url:
    # Since the callback process plugin only permits a single source value to be
    # passed to the specified PHP function, we have a custom plugin which
    # enables us to pass a character list to be trimmed.
    plugin: acme_trim
    source: internal_url
    trim_type: left
    charlist: /
  uri:
    -
      plugin: concat
      source:
        - constants/s3_prefix
        - '@_trimmed_url'
    -
      # Make sure the referenced file actually exists in S3 (does a simple query
      # on the s3fs_file table, throwing MigrateSkipRowException if missing).
      plugin: acme_skip_missing_file
  fid:
    # This operates much like migrate_plus's entity_lookup, to return an
    # existing entity ID based on arbitrary properties. The purpose here is if
    # the file URI is already in file_managed, point the migrate map table to
    # the existing file entity - otherwise, a new file entity will be created.
    plugin: acme_load_by_properties
    entity_type: file
    properties: uri
    source: '@uri'
    default_value: NULL
  filename:
    plugin: callback
    callable: basename
    source: '@uri'
destination:
  plugin: entity:file
```

The media entity migration is pretty straightforward:

```yaml
id: acme_video_media
source:
  plugin: acme_internal_video
  constants:
    true: 1
process:
  status: published
  name: title
  caption: caption
  # The source column media_date is YYYY-MM-DD HH:MM:SS format - the Drupal
  # field is configured as date-only, so the source value must be truncated
  # to YYYY-MM-DD.
  date:
    - plugin: skip_on_empty
      source: media_date
      method: process
    - plugin: substr
      length: 10
  field_media_video/0/target_id:
    - plugin: migration_lookup
      migration: acme_video
      source: id
      no_stub: true
    -
      # If we haven't migrated a file entity, skip this media entity.
      plugin: skip_on_empty
      method: row
  field_media_video/0/display: constants/true
  field_media_video/0/description: caption
destination:
  plugin: entity:media
  default_bundle: video
migration_dependencies:
  required:
    - acme_video
```

Did I mention that we needed to create a node for each video, linking to related content of other types? Here we go:

```yaml
id: acme_video_node
source:
  plugin: acme_internal_video
  constants:
    text_format: formatted
    url_prefix:
    s3_prefix: s3://image/
process:
  title: title
  status: published
  teaser/value: caption
  teaser/format: constants/text_format
  length:
    # Converts HH:MM:SS to integer seconds. Left as an exercise to the reader.
    plugin: acme_video_length
    source: duration
  video:
    plugin: migration_lookup
    migration: acme_video_media
    source: id
    no_stub: true
  # Field to preserve the original URL.
  old_url:
    plugin: concat
    source:
      - constants/url_prefix
      - url_name
  _trimmed_thumbnail:
    plugin: acme_trim
    trim_type: left
    charlist: '/'
    source: thumbnail
  teaser_image:
    - plugin: skip_on_empty
      source: '@_trimmed_thumbnail'
      method: process
    -
      # Form the URI as stored in file_managed.
      plugin: concat
      source:
        - constants/s3_prefix
        - '@_trimmed_thumbnail'
    -
      # Look up the fid.
      plugin: acme_load_by_properties
      entity_type: file
      properties: uri
    -
      # Find the media entity referencing that fid.
      plugin: acme_load_by_properties
      entity_type: media
      properties: field_media_image
  # Note that for each of these entity reference fields, we skipped some
  # content, so need to make sure stubs aren't created for the missing content.
  # Also note that the source fields here are populated in a PREPARE_ROW event.
  related_people:
    plugin: migration_lookup
    migration: acme_people
    source: related_people
    no_stub: true
  related_events:
    plugin: migration_lookup
    migration: acme_event
    source: related_events
    no_stub: true
  tag_keyword:
    plugin: migration_lookup
    migration: acme_keyword
    source: keyword_ids
    no_stub: true
destination:
  plugin: entity:node
  default_bundle: video
migration_dependencies:
  required:
    - acme_image_media
    - acme_video_media
    - acme_people
    - acme_event
    - acme_keyword
```

Auditing missing files

A useful thing to know (particularly with the client incrementally populating the S3 bucket with image files) is what files are referenced in the legacy tables but not actually in the bucket. Below is a drush command we threw together to answer that question - it will query each legacy image or video path field we're using, construct the s3:// version of the path, and look it up in the s3fs_file table to see if it exists in S3.

```php
/**
 * Find files missing from S3.
 */
function drush_acme_migrate_missing_files() {
  $legacy_db = Database::getConnection('default', 'migrate');
  $drupal_db = Database::getConnection();
  $table_fields = [
    [
      'table_name' => 'asset_metadata',
      'url_column' => 'asset_path',
      'date_column' => 'created',
    ],
    [
      'table_name' => 'media',
      'url_column' => 'internal_url',
      'date_column' => 'media_date',
    ],
    [
      'table_name' => 'person',
      'url_column' => 'profile_picture_path',
      'date_column' => 'created',
    ],
    // … on to 9 more columns among three more tables...
  ];
  $header = 'uri,legacy_table,legacy_column,date';
  drush_print($header);
  foreach ($table_fields as $table_info) {
    $missing_count = 0;
    $total_count = 0;
    $table_name = $table_info['table_name'];
    $url_column = $table_info['url_column'];
    $date_column = $table_info['date_column'];
    $query = $legacy_db->select($table_name, 't')
      ->fields('t', [$url_column])
      ->isNotNull($url_column)
      ->condition($url_column, '', '<>');
    if ($table_name == 'media') {
      $query->condition('type', 'INTERNALVIDEO');
    }
    if ($table_name == 'person') {
      // This table functions much like Drupal's node table.
      $query->innerJoin('publishable_entity', 'pe', '');
      $query->fields('pe', [$date_column]);
    }
    else {
      $query->fields('t', [$date_column]);
    }
    $query->distinct();
    foreach ($query->execute() as $row) {
      $path = trim($row->$url_column);
      if ($path) {
        $total_count++;
        // Paths are inconsistent about leading slashes.
        $path = ltrim($path, '/');
        // Sometimes they have doubled-up slashes.
        $path = str_replace('//', '/', $path);
        if ($table_name == 'media') {
          $s3_path = 's3://' . $path;
        }
        else {
          $s3_path = 's3://image/' . $path;
        }
        $s3 = $drupal_db->select('s3fs_file', 's3')
          ->fields('s3', ['uri'])
          ->condition('uri', $s3_path)
          ->execute()
          ->fetchField();
        if (!$s3) {
          $output_row = "$s3_path,$table_name,$url_column,{$row->$date_column}";
          drush_print($output_row);
          $missing_count++;
        }
      }
    }
    drush_log("$missing_count of $total_count files missing in $table_name column $url_column", 'ok');
  }
}
```

Challenges and contributions

The s3fs module's primary use case is where the configured S3 bucket is used only by the Drupal site, and populated directly by file uploads through Drupal - our project was an outlier in having all files in the S3 bucket first, and in sheer volume. A critical piece of the implementation is the s3fs_file table, which caches metadata for all files in the bucket so Drupal rarely needs to access the bucket itself other than on file upload (since file URIs are converted to direct S3 URLs when rendering, web clients fetch files directly from S3, not through Drupal). In our case, the client had an existing S3 bucket which contained all the video files (and more) used by their legacy site, and to which they bulk uploaded image files directly, so we did not need to do this during migration. The module does have an s3fs-refresh-cache command to populate the s3fs_file table from the current bucket contents, but we did have to deal with some issues around the cache table.

Restriction on URI lengths

As soon as we started trying to use drush s3fs-refresh-cache, we ran into the existing issue Getting Exception 'PDOException'SQLSTATE[22001] When Running drush s3fs-refresh-cache - URIs in the bucket longer than the 255-character length of s3fs_file's uri column. The exception aborted the refresh entirely, and because the refresh operation generates a temporary version of the table from scratch, then swaps it for the "live" table, the exception prevented any file metadata from being refreshed if there was one overflowing URI. I submitted a patch implementing the simplest workaround - just generating a message and ignoring overly-long URIs. Discussion continues around an alternate approach, but we used my patch in our project.

Lost primary key

So, once we got the cache refresh to work, we found serious performance problems. We had stumbled on an existing issue, "s3fs_file" table has no primary key. I tracked down the cause - because the uri column is 255 characters long, with InnoDB it cannot be indexed when using a multibyte collation such as utf8_general_ci. And Drupal core has a bug, DatabaseSchema_mysql::createTableSql() can't set table collation, preventing the setting of the utf8_bin collation directly in the table schema. The s3fs module works around that bug when creating the s3fs_file table at install time by altering the collation after table creation - but the cache refresh created a new cache table using only the schema definition and did not pick up the altered collation. Thus, only people like us who used cache refresh would lose the index, and those with more modest bucket sizes might never even notice. My patch to apply the collation (later refined by jansete) was committed to the s3fs module.
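For anyone hitting the same symptom, the repair boils down to giving the uri column a byte-wise collation so InnoDB can index all 255 of its characters, then restoring the missing key. A rough SQL sketch of what the module's workaround accomplishes (the module actually applies this through Drupal's database API, and statement details may differ):

```sql
-- Switch s3fs_file to a byte-wise collation so the full 255-character
-- uri column becomes indexable under InnoDB...
ALTER TABLE s3fs_file CONVERT TO CHARACTER SET utf8 COLLATE utf8_bin;
-- ...then restore the primary key that was silently dropped.
ALTER TABLE s3fs_file ADD PRIMARY KEY (uri);
```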

Scalability of cache refresh

As the client loaded more and more images into the bucket, drush s3fs-refresh-cache started running out of memory. Our bucket was quite large (1.7 million files at last count), and the refresh function gathered all file metadata in memory before writing it to the database. I submitted a patch to chunk the metadata to the db within the loop, which has been committed to the module.
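The fix follows a generic pattern: rather than accumulating every metadata row in memory before writing, flush to the database every N rows. A language-agnostic sketch of that chunking pattern (Python here for brevity; `write_batch` is a hypothetical stand-in for the module's actual database write):

```python
def refresh_cache(objects, write_batch, chunk_size=1000):
    """Write metadata rows in fixed-size chunks instead of all at once."""
    batch = []
    for obj in objects:
        batch.append(obj)
        if len(batch) >= chunk_size:
            write_batch(batch)  # flush this chunk to the database
            batch = []
    if batch:
        write_batch(batch)  # flush the remainder

# Usage: 2500 objects flushed as batches of 1000, 1000, and 500.
batches = []
refresh_cache(range(2500), batches.append, chunk_size=1000)
```

Memory use is now bounded by `chunk_size` rather than by the bucket size, which is what made the 1.7-million-file refresh feasible.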

Image dimensions

Once there were lots of images in S3 to migrate, the image media migrations were running excruciatingly slowly. I quickly guessed and confirmed that they were accessing the files directly from S3, and then (less quickly) stepped through the debugger to find the reason - the image fields needed the image width and height, and since this data wasn't available from the source database to be directly mapped in the migration, it went out and fetched the S3 image to get the dimensions itself. This was, of course, necessary - but given that migrations were being repeatedly run for testing on various environments, there was no reason to do it repeatedly. Thus, we introduced an image dimension cache table to capture the width and height the first time we imported an image, and any subsequent imports of that image only needed to get the cached dimensions.
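The cache table itself needs nothing beyond the URI and the two dimensions. A sketch of the custom table (our actual schema was defined via Drupal's Schema API; column names match the queries in the surrounding code, sizes are illustrative):

```sql
-- Custom image-dimension cache, keyed by the s3:// URI.
CREATE TABLE s3fs_image_cache (
  uri VARCHAR(255) NOT NULL,
  width INT UNSIGNED DEFAULT NULL,
  height INT UNSIGNED DEFAULT NULL,
  PRIMARY KEY (uri)
);
```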

In the acme_image_media migration above, we use this process plugin which takes the image URI and returns an array with width and height keys populated with the cached values if present, and NULL if the dimensions are not yet cached:

```php
/**
 * Fetch cached dimensions for an image path (purportedly) in S3.
 *
 * @MigrateProcessPlugin(
 *   id = "acme_image_dimensions"
 * )
 */
class AcmeImageDimensions extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    $dimensions = Database::getConnection('default', 'migrate')
      ->select('s3fs_image_cache', 's3')
      ->fields('s3', ['width', 'height'])
      ->condition('uri', $value)
      ->execute()
      ->fetchAssoc();
    if (empty($dimensions)) {
      return ['width' => NULL, 'height' => NULL];
    }
    return $dimensions;
  }

}
```

If the dimensions were empty, when the media entity was saved Drupal core fetched the image from S3 and the width and height were saved to the image field table. We then caught the migration POST_ROW_SAVE event to cache the dimensions:
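An event subscriber only fires once it is registered as a tagged service in the module's *.services.yml file. A minimal registration sketch (the module name, namespace, and service ID here are placeholders, not the project's actual names):

```yaml
services:
  acme_migrate.migrate_subscriber:
    class: Drupal\acme_migrate\EventSubscriber\AcmeMigrateSubscriber
    tags:
      - { name: event_subscriber }
```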

```php
class AcmeMigrateSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    $events[MigrateEvents::POST_ROW_SAVE] = 'import';
    return $events;
  }

  public function import(MigratePostRowSaveEvent $event) {
    $row = $event->getRow();
    // For image media, if width/height have been freshly obtained, cache them.
    if (strpos($event->getMigration()->id(), 'image_media') > 0) {
      // Note that this "temporary variable" was populated in the migration as a
      // width/height array, using the acme_image_dimensions process plugin.
      $original_dimensions = $row->getDestinationProperty('_image_dimensions');
      // If the dimensions are populated, everything's fine and all of this is
      // skipped.
      if (empty($original_dimensions['width'])) {
        // Find the media entity ID.
        $destination_id_values = $event->getDestinationIdValues();
        if (is_array($destination_id_values)) {
          $destination_id = reset($destination_id_values);
          // For performance, cheat and look directly at the table instead of
          // doing an entity query.
          $dimensions = Database::getConnection()
            ->select('media__field_media_image', 'msi')
            ->fields('msi', ['field_media_image_width', 'field_media_image_height'])
            ->condition('entity_id', $destination_id)
            ->execute()
            ->fetchAssoc();
          // If we have dimensions, cache them.
          if ($dimensions && !empty($dimensions['field_media_image_width'])) {
            $uri = $row->getDestinationProperty('_uri');
            Database::getConnection('default', 'migrate')
              ->merge('s3fs_image_cache')
              ->key('uri', $uri)
              ->fields([
                'width' => $dimensions['field_media_image_width'],
                'height' => $dimensions['field_media_image_height'],
              ])
              ->execute();
          }
        }
      }
    }
  }

}
```

Safely testing with the bucket

Another problem with the size of our bucket was that it was too large to economically make and maintain a separate copy to use for development and testing. So, we needed to use the single bucket - but of course, the videos in it were being used in the live site, so it was critical not to mess with them. We decided to use the live bucket with credentials allowing us to read and add files to the bucket, but not delete them - this would permit us to test uploading files through the admin interface, and most importantly from a migration standpoint access the files, but not do any damage. Worst-case scenario would be the inability to clean out test files, but writing a cleanup tool after the fact to clear any extra files out would be simple enough. Between this, and the fact that images were in a separate folder in the bucket (and we weren't doing any uploads of videos, simply migrating references to them), the risk of using the live bucket was felt to be acceptable. At first, though, the client was having trouble finding credentials that worked as we needed. As a short-term workaround, I implemented a configuration option for the s3fs module to disable deletion in the stream wrapper.
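For reference, an S3 policy along these lines grants the read-and-add-but-not-delete access described - the bucket name and exact action list are illustrative, not the client's actual policy (note the inclusion of s3:PutObjectAcl, which turned out to matter, as described below):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::example-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

There is deliberately no s3:DeleteObject statement, so deletes fail while reads and uploads succeed.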

Investigating the permissions issues with my own test bucket, trying to add the bare minimum permissions needed for reading and writing objects, I arrived at a point where migration worked as desired and deletion was prevented - but uploading files to the bucket through Drupal silently failed. There was an existing issue in the s3fs queue, but it had not been diagnosed. I finally figured out the cause (Slack comment - "God, the layers of middleware I had to step through to find the precise point of death…") - by default, objects are private when uploaded to S3, and you need to explicitly set public-read in the ACL. Which the s3fs module does - but doing so requires the PutObjectAcl permission, which I had not set (I've suggested the s3fs validator could detect and warn of this situation). Adding that permission enabled everything to work; once the client applied the necessary policies we were in business…

… for a while. The use of a single bucket became a problem once front-end developers began actively testing with image styles, and we were close enough to launch to enable deletion so image styles could be flushed when changed. The derivatives for S3 images are themselves stored in S3 - and with people generating derivatives in different environments, the s3fs_file table in any given environment (in particular the "live" environment on Pantheon, where the eventual production site was taking shape) became out of sync with the actual contents of S3. In particular, if styles were generated in the live environment then flushed in another environment, the live cache table would still contain entries for the derived styles (thus the site would generate URLs to them) even though they didn't actually exist in S3 - thus, no derived images would render. To address this, we had each environment set the s3fs root_folder option so they would each have their own sandbox - developers could then work on image styles at least with files they uploaded locally for testing, although their environments would not then see the "real" files in the bucket.
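Overriding root_folder per environment can be done with a standard settings.php configuration override - a sketch, assuming the option lives under the module's s3fs.settings config object as in the D8 version we used, with a placeholder folder name:

```php
// settings.php: confine this environment to its own folder in the bucket.
// 'sandbox-dev1' is a placeholder - use a distinct name per environment.
$config['s3fs.settings']['root_folder'] = 'sandbox-dev1';
```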

We discussed more permanent alternatives and Sean Blommaert put forth some suggestions in the s3fs issue queue - ultimately (after site launch) we found there is an existing (if minimally maintained) module extending stage_file_proxy. I will most certainly work with this module on any future projects using s3fs.

The tl;dr - lessons learned

To summarize the things to keep in mind if planning on using s3fs in your Drupal project:

  1. Install the s3fs_file_proxy_to_s3 module first thing, and make sure all environments have it enabled and configured.
  2. Make sure the credentials you use for your S3 bucket have the PutObjectAcl permission - this is non-obvious but essential if you are to publicly serve files from S3.
  3. Watch your URI lengths - if the s3://… form of the URI is > 255 characters, it won't work (Drupal's file_managed table has a 255-character limit). When using image styles, the effective limit is significantly lower due to folders added to the path.
  4. With image fields which reference images stored in S3, if you don't have width and height to set on the field at entity creation time, you'll want to implement a caching solution similar to the above.
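Point 3 is easy to check up front, before a migration fails on it. A small language-agnostic sketch (Python; the prefix is whatever your stream wrapper plus folder contributes, here the hypothetical s3://image/ used throughout this post):

```python
MAX_URI_LENGTH = 255  # length of Drupal's file_managed.uri column

def s3_uri_fits(relative_path, prefix='s3://image/'):
    """Return True if the prefixed stream-wrapper URI fits in file_managed."""
    return len(prefix + relative_path.lstrip('/')) <= MAX_URI_LENGTH

s3_uri_fits('photos/2019/event.jpg')  # a short path fits
s3_uri_fits('a' * 300)                # an overlong path does not
```

Remember to leave extra headroom below 255 if the files will pass through image styles, since the style machinery adds folders to the path.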

Apart from the image style issues, most of the direct development detailed above was mine, but as on any project thoughts were bounced off the team, project managers handled communication with the client, testers provided feedback, etc. Thanks to the whole team, particularly Sean Blommaert (image styles, post feedback), Kevin Thompson and Willy Karam (client communications), and Karoly Negyesi (post feedback).

Tags: Migration, Drupal, Planet Drupal, PHP

Use the Twitter thread below to comment on this post:

— Virtuoso Performance (@VirtPerformance) January 9, 2019


Drupal blog: Refreshing the Drupal administration UI

1 week ago

This blog has been re-posted and edited with permission from Dries Buytaert's blog.

Last year, I talked to nearly one hundred Drupal agency owners to understand what is preventing them from selling Drupal. One of the most common responses was that Drupal's administration UI looks outdated.

This critique is not wrong. Drupal's current administration UI was originally designed almost ten years ago when we were working on Drupal 7. In the last ten years, the world did not stand still; design trends changed, user interfaces became more dynamic and end-user expectations have changed with that.

To be fair, Drupal's administration UI has received numerous improvements in the past ten years; Drupal 8 shipped with a new toolbar, an updated content creation experience, more WYSIWYG functionality, and even some design updates.

A comparison of the Drupal 7 and Drupal 8 content creation screen to highlight some of the improvements in Drupal 8.

While we made important improvements between Drupal 7 and Drupal 8, the feedback from the Drupal agency owners doesn't lie: we have not done enough to keep Drupal's administration UI modern and up-to-date.

This is something we need to address.

We are introducing a new design system that defines a complete set of principles, patterns, and tools for updating Drupal's administration UI.

In the short term, we plan on updating the existing administration UI with the new design system. Longer term, we are working on creating a completely new JavaScript-based administration UI.

The content administration screen with the new design system.

Community feedback on the proposal is overwhelmingly positive, with comments like "Wow! Such an improvement!" and "Well done! High contrast and modern look."

Sample space sizing guidelines from the new design system.

I also ran the new design system by a few people who spend their days selling Drupal and they described it as "clean" with "good use of space" and a design they would be confident showing to prospective customers.

Whether you are a Drupal end-user, or in the business of selling Drupal, I recommend you check out the new design system and provide your feedback.

Special thanks to Cristina Chumillas, Sascha Eggenberger, Roy Scholten, Archita Arora, Dennis Cohn, Ricardo Marcelino, Balazs Kantor, Lewis Nyman, and Antonella Severo for all the work on the new design system so far!

We have started implementing the new design system as a contributed theme with the name Claro. We are aiming to release a beta version for testing in the spring of 2019 and to include it in Drupal core as an experimental theme by Drupal 8.8.0 in December 2019. With more help, we might be able to get it done faster.

Throughout the development of the refreshed administration theme, we will run usability studies to ensure that the new theme indeed is an improvement over the current experience, and we can iteratively improve it along the way.

Acquia has committed to being an early adopter of the theme through the Acquia Lightning distribution, broadening the potential base of projects that can test and provide feedback on the refresh. Hopefully other organizations and projects will do the same.

How can I help?

The team is looking for more designers and frontend developers to get involved. You can attend the weekly meetings on #javascript on Drupal Slack Mondays at 16:30 UTC and on #admin-ui on Drupal Slack Wednesdays at 14:30 UTC.

Thanks to Lauri Eskola, Gábor Hojtsy and Jeff Beeman for their help with this post.


FFW Agency: It’s time to start planning for Drupal 9

1 week ago
It’s time to start planning for Drupal 9 leigh.anderson Wed, 01/09/2019 - 18:24

Drupal 9 is coming. Even if it feels like you only just upgraded to Drupal 8, soon it’ll be time to make the switch to the next version. Fortunately, the shift from Drupal 8 to Drupal 9 should be relatively painless for most organizations. Here’s why.

A little background

Though tools were built in to make the upgrade from Drupal 6 or 7 to Drupal 8 run as smoothly as possible, it could still be a difficult or dramatic process. Drupal 8 marked a major shift for the Drupal world: it introduced major new dependencies, such as Symfony, and a host of new features in Core. The new structure of the software made it tricky to upgrade sites in the first place, which was complicated by the fact that it took a long time for a number of modules to be properly optimized and secured for the new version.

Drupal 9: A natural extension of Drupal 8

Fortunately, the large number of changes made to the Drupal platform in Drupal 8 has made it relatively simple to build, expand, and upgrade for the future. The software has been designed specifically to make the transition from Drupal 8 to Drupal 9 simple, so that the migration requires little more work than upgrading between minor versions of Drupal 8.

In fact, as Dries Buytaert (the founder and project lead of Drupal) wrote recently on his blog:

Instead of working on Drupal 9 in a separate codebase, we are building Drupal 9 in Drupal 8. This means that we are adding new functionality as backwards-compatible code and experimental features. Once the code becomes stable, we deprecate any old functionality.

Planning for Drupal 9

As more information is released about the new features and updates in Drupal 9, organizations should consider their digital roadmaps and how the new platform will affect them. Regardless of what your plans are feature-wise, your organization should begin planning to upgrade to Drupal 9 no later than summer of 2021. That is because the projected end-of-life for Drupal 8 is November 2021, when Symfony 3 (Drupal 8’s largest major dependency) will no longer be supported by its own community.

In the meantime, the best thing your organization can do to prepare for the launch of Drupal 9 is to make sure that you keep your Drupal 8 site fully up to date.

For help planning out your Drupal roadmap, and to make sure that you’ll be ready for a smooth upgrade to Drupal 9 when it releases, contact FFW. We’re here to help you plan out your long-term Drupal strategy and make sure that your team can make the most of your site today, tomorrow, and after Drupal 9 is released.


Joachim's blog: Getting more than you bargained for: removing a Drupal module with Composer

1 week ago

It's no secret that I find Composer a very troublesome piece of software to work with.

I have issues with Composer on two fronts. First, its output is extremely user-unfriendly, such as the long lists of impenetrable statements about dependencies that it produces when it tells you why it can't make a change you request. Second, many Composer commands have unwanted side-effects, and these work against the practice that changes to your codebase should be as simple as possible for the sake of developer sanity, testing, and user acceptance.

I recently discovered that removing packages is one such task where Composer has ideas of its own. A command such as remove drupal/foo will take it on itself to also update some apparently unrelated packages, meaning that you either have to manage the deployment of these updates as part of your uninstallation of a module, or roll up your sleeves and hack into the mess Composer has made of your codebase.

Guess which option I went for.

Step 1: Remove the module you actually want to remove

Let's suppose we want to remove the Drupal module 'foo' from the codebase because we're no longer using it:

$ composer remove drupal/foo

This will have two side effects, one of which you might want, and one of which you definitely don't.

Side effect 1: dependent packages are removed

This is fine, in theory. You probably don't need the modules that are dependencies of foo. Except... Composer knows about dependencies declared in composer.json, which for Drupal modules might be different from the dependencies declared in module info.yml files (if maintainers haven't been careful to ensure they match).

Furthermore, Composer doesn't know about Drupal configuration dependencies. You could have the situation where you installed module Foo, which had a dependency on Bar, so you installed that too. But then you found Bar was quite useful in itself, and you've created content and configuration on your site that depends on Bar. Ideally, at that point, you should have declared Bar explicitly in your project's root composer.json, but most likely, you haven't.

So at this point, you should go through Composer's output of what it's removed, and check that none of those Drupal modules are still enabled on your site.

I recommend taking the list of Drupal modules that Composer has just told you it's removed in addition to the requested one, and checking each one's status on your live site:

$ drush pml | ag MODULE

If you find that any modules are still enabled, then revert the changes you've just made with the remove command, and declare the modules in your root composer.json, copying the declaration from the composer.json file of the module you are removing. Then start step 1 again.
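For illustration, here is one way to pull the constraint out of the removed module's composer.json so you can copy it into your root file. The module names (drupal/foo, drupal/bar), the path, and the ^1.4 constraint are all hypothetical stand-ins:

```shell
# Hypothetical composer.json of the module being removed (drupal/foo),
# declaring the dependency (drupal/bar) that our site still uses.
mkdir -p /tmp/foo-module
cat > /tmp/foo-module/composer.json <<'EOF'
{
    "name": "drupal/foo",
    "require": {
        "drupal/bar": "^1.4"
    }
}
EOF

# Extract the exact constraint so it can be pasted into the root
# composer.json (or passed to `composer require drupal/bar:^1.4`).
grep -o '"drupal/bar": "[^"]*"' /tmp/foo-module/composer.json
```

Once the dependency is declared at the root, the remove command in step 1 will no longer take it away.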

Side effect 2: unrelated packages are updated

This is undesirable basically because any package update has to be evaluated and tested before it's deployed. Having that happen as part of a package removal turns what should be a straightforward task into something complex and unpredictable. It forces the developer to handle two operations that should be separate as a single one.

(It turns out that the maintainers of Composer don't even consider this to be a problem, and, as I have unfortunately come to expect, the issue on GitHub is a fine example of bad maintainership (for the nadir, see the issue on the use of JSON as a format for the main composer file) -- dismissing the problems that users explain they have, claiming the problems are by design, and so on.)

So to revert this, you need to pick apart the changes Composer has made, and reverse some of them.

Before you go any further, commit everything that Composer changed with the remove command. In my preferred method of operation, that means all the files, including the modules folder and the vendor folder. I know that Composer recommends you don't do that, but frankly I think trusting Composer not to damage your codebase on a whim is folly: you need to be able to back out of any mess it may make.

Step 2: Repair composer.lock

The composer.lock file is the record of how the packages currently are, so to undo some of the changes Composer made, we undo some of the changes made to this file, then get Composer to update based on the lock.

First, restore composer.lock to how it was before you started:

$ git checkout HEAD^ composer.lock

Unstage it. I prefer a GUI for git staging and unstaging operations, but on the command line it's:

$ git reset composer.lock

Your composer lock file now looks as it did before you started.

Use either git add -p or your favourite git GUI to pick out the right bits. Understanding which bits are the 'right bits' takes a bit of mental gymnastics: overall, we want to keep the changes in the last commit that removed packages completely, but we want to discard the changes that upgrade packages.

But here we've got a reverted diff. So in terms of what we have here, we want to discard changes that re-add a package, and stage and commit the changes that downgrade packages.

When you're done staging you should have:

  • the change to the content hash should be unstaged
  • chunks that re-add a whole package should be unstaged
  • chunks that change a package's version should be staged (be sure to get all the bits that relate to a package)

Then commit what is staged, and discard the rest.

Then do a git diff of composer.lock against your starting point: you should see only complete package removals.

Step 3: Restore packages with unrelated changes

Finally, do:

$ composer update --lock

This will restore the packages that Composer updated against your will in step 1 to their original state.

If you are committing Composer-managed packages to your repository, commit them now.

As a final sanity check, do a git diff against your starting point, like this:

$ git diff --name-status master

You should see mostly deleted files. To verify there's nothing that shouldn't be there in the changed files, do:

$ git diff --name-status master | ag '^[^D]'

You should see only composer.json, composer.lock, and the autoloader's files.

PS. If I am wrong and there IS a way to get Composer to remove a package without side-effects, please tell me.

I feel I have exhausted all the options of the remove command:

  • --no-update only changes composer.json, and makes no changes to package files at all. I'm not sure what the point of this is.
  • --no-update-with-dependencies only removes the one package, and doesn't remove any dependencies that are not required anywhere else. This leaves you having to pick through composer.json files yourself and remove dependencies individually, which completely defeats the purpose of a package manager!

Why is something as simple as a package removal turned into a complex operation by Composer? Honestly, I'm baffled. I've tried reasoning with the maintainers, and it's a brick wall.

Tags: Composer

Matt Glaman: Writing better Drupal code with static analysis using PHPStan

1 week ago
Writing better Drupal code with static analysis using PHPStan Published on Tuesday, January 8, 2019

PHP is a loosely typed, interpreted language. That means we cannot compile our scripts and find possible execution errors without doing explicit inspections of our code. It also means we need to rely on conditional type checking or on phpDoc comments to tell other devs or an IDE what kind of value to expect. Really, there is no way to assess the quality of the code or discover possible bugs without thorough test coverage and regular review.
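As a sketch of the kind of bug that slips past the interpreter but not past static analysis (the file path, function name, and PHPStan level below are arbitrary examples, and the analysis command assumes PHPStan has been installed with composer require --dev phpstan/phpstan):

```shell
# A small PHP file with a documented-type mismatch that PHP itself
# will not complain about until (at best) runtime.
cat > /tmp/phpstan-demo.php <<'EOF'
<?php

/** @param int[] $items */
function total(array $items): int {
    return array_sum($items);
}

// Strings passed where the phpDoc promises ints: this "works" when
// executed, but PHPStan flags the mismatch without running the code.
echo total(['1', '2']);
EOF

# With PHPStan installed, analysis would be run roughly like this:
# vendor/bin/phpstan analyse /tmp/phpstan-demo.php --level=6
cat /tmp/phpstan-demo.php
```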

Karim Boudjema: 10 helpful Drupal 8 modules for 2019

1 week 1 day ago
It’s always hard to pick the most useful Drupal 8 modules, because it depends on the site you will create or manage. But there are some really helpful modules you can use in almost every situation.

In this post, I will share some modules that I use almost all the time in my Drupal 8 projects. They are not related to a particular type of site, but they are always helpful, in both development and production environments.

1. Admin Toolbar

The Admin Toolbar module (D8) will save you a great deal of time. By providing drop-down menus that extend the original Drupal toolbar, it helps you perform various admin actions faster and more easily.

The module works on top of the default core Toolbar module, so it is very light and keeps all the toolbar's functionality (shortcuts / media responsiveness).

Creating an 'Add to Calendar' Widget in Drupal

1 week 1 day ago
Creating an 'Add to Calendar' Widget in Drupal

A simple request: we need an 'Add to Calendar' widget to add our events to Google Calendar, iCal, and Outlook. Simple (once I had completed it!).

markconroy Tue, 01/08/2019 - 21:11

There's a module for that. There is, it's called, obviously, addtocalendar. It works very well, if you:

  • want to use the external service, and
  • want to pay for that service (its smallest plan only allows 2 events per month).

If you don't want to use an external service for something as simple as adding an event to a calendar, then it looks like you'll need a custom solution.

The PatternLab Part

Here's the custom solution I came up with (in the future, I'll look at creating a module for this with a settings/UI page for site builders). Note, it's a PatternLab implementation; if you don't use PatternLab and just want to work directly in your Drupal theme, it would be even easier.

Here's the code for the 'Add to Calendar' pattern in PatternLab (some classes and things are removed to make it easier to read):

{%
set classes = [
  "add-to-calendar"
]
%}

{% set ical_link = 'data:text/calendar;charset=utf8,BEGIN:VCALENDAR%0AVERSION:2.0%0ABEGIN:VEVENT%0ADTSTART:' ~ atc_start_date|date("Ymd\\THi00\\Z") ~ '%0ADTEND:' ~ atc_end_date|date("Ymd\\THi00\\Z") ~ '%0ASUMMARY:' ~ atc_title ~ '%0ADESCRIPTION:' ~ atc_details|striptags ~ '%0ALOCATION:' ~ atc_location|replace({"\n": " "}) ~ '%0AEND:VEVENT%0AEND:VCALENDAR' %}

{% set google_link = '' ~ atc_title ~ '&dates=' ~ atc_start_date|date("Ymd\\THi00\\Z") ~ '/' ~ atc_end_date|date("Ymd\\THi00\\Z") ~ '&details=' ~ atc_details|striptags ~ '&location=' ~ atc_location|replace({"\n": " "}) %}

<div{{ attributes.addClass(classes) }}>
  <a href="{{ google_link }}">Add to Google Calendar</a>
  <a href="{{ ical_link }}">Add to iCal</a>
  <a href="{{ ical_link }}">Add to Outlook</a>
</div>

What does the above code do?

  • Creates a Google Calendar variable and an iCal variable. Outlook will also use iCal.
  • Uses these variables as links to add the event to their respective calendars.

Within the variables, we have some more variables (start date, end date, etc), which we should probably wrap in conditional statements so that their clauses don't print unless they are present in Drupal (some fields might be optional on your event content type, such as end time).

These variables are:

  • atc_start_date: Start Date and time
  • atc_end_date: End Date and time
  • atc_title: the name of the event
  • atc_details: description for the event
  • atc_location: place of event
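To make the structure of those links concrete, here is a minimal sketch of the same iCal data URL assembled outside Twig, with made-up event values standing in for the atc_* variables (and none of the character escaping a real implementation would need):

```shell
# Stand-in values for the template variables above.
atc_title="Drupal Mountain Camp"
atc_start_date="20190307T090000Z"
atc_end_date="20190310T170000Z"

# %0A is a URL-encoded newline, so each iCal property lands on its
# own line once the data: URL is decoded by the calendar application.
ical_link="data:text/calendar;charset=utf8,BEGIN:VCALENDAR%0AVERSION:2.0%0ABEGIN:VEVENT%0ADTSTART:${atc_start_date}%0ADTEND:${atc_end_date}%0ASUMMARY:${atc_title}%0AEND:VEVENT%0AEND:VCALENDAR"

# Print the link (and save a copy for inspection).
echo "$ical_link" | tee /tmp/ical_link.txt
```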

In our Event pattern in PatternLab, we then have a variable called 'add_to_calendar' so that events have the option to have this widget or not. In event.twig, we simply print:

{% if add_to_calendar %}
  {% include '@site-components/add-to-calendar/add-to-calendar.twig' %}
{% endif %}
The Drupal Part

In Drupal we create a boolean field (field_event_add_to_calendar) on our event content type; if this is ticked, we will display the Add to Calendar widget.

Here's the code from node--event--full.html.twig

{# Set the Add to Calendar Variables #}

{% if node.field_add_to_calendar.value %}
  {% set add_to_calendar = true %}
{% endif %}

{% if node.field_event_date.value %}
  {% set atc_start_date = node.field_event_date.value %}
{% endif %}

{% if node.field_event_date.end_value %}
  {% set atc_end_date = node.field_event_date.end_value %}
{% endif %}

{% if node.title.value %}
  {% set atc_title = node.title.value %}
{% endif %}

{% if node.field_event_intro.value %}
  {% set atc_details = node.field_event_intro.value %}
{% endif %}

{% if node.field_event_location.value %}
  {% set atc_location = node.field_event_location.value %}
{% endif %}

...

{% include "@content/event/event.twig" %}

To explain:

  • If the 'Add to Calendar' boolean is on, we set the add_to_calendar variable to true.
  • This in turn tells PatternLab to render the Add to Calendar component.
  • We then check whether each field we might use has a value in it, such as a start date and an end date.
  • If so, we map the values from each of those fields to variables in our Add to Calendar component (such as atc_start_date, atc_title, etc.).

Now, when you view a node, you will see your Add to Calendar widget on any nodes that the editors choose to put it. You can see a sample of the Add to Calendar widget in my PatternLab.

Simple, once I figured it out.

Got an improvement for this? The comments are open.


Palantir: Federated Search: The Demo

1 week 2 days ago
Federated Search: The Demo brandt Mon, 01/07/2019 - 14:57 Ken Rickard and Avi Schwab Jan 7, 2019

See Palantir’s federated search application in action.

We recently published a blog post introducing our solution to Google Search Appliance being discontinued—an open source application we built and named Federated Search. If you haven’t already, we recommend checking out that first blog post to get the basics on how we built the application and why. Read on here to learn how you can see for yourself what the application does.

Search API Federated Solr is a complex application, and the best way to understand what's going on is to see it in action! Since the application requires a Solr instance in addition to a number of Drupal modules, we're not able to use a hosted demo service. Instead, we've bundled all of the pieces together with Palantir's open source dev tools — the-vagrant and the-build — for a seamless demo experience that runs in a local virtual machine (VM) running on Vagrant. Head to GitHub to review the requirements, and then clone the repo and get started.

Setting up the environment

The-vagrant is a customizable Vagrant environment that can be built into a project from scratch or easily retrofitted onto an existing project (such as a new support client). On first setup, a handy install wizard takes users through a configuration process to choose hostnames, enable optional services like Solr, and enable further customization through Ansible tasks. The-vagrant can handle single-site, multi-site, or multiple-site (many-docroot) setups in a single box, so it was a perfect match for our Federated Search environment.

The-build is a set of reusable phing targets for building Drupal projects. Once our VM is up and running, we use a standard set of these targets to automate a number of complex tasks, such as:

  • Copying settings and services files into Drupal sites directories
  • Installing Drupal using an install profile and any existing config
  • Running post-install tasks like migrations
  • Running test suites
  • Importing databases from hosting environments
  • Deploying code to hosting environments

We have a shared set of phing targets that provide the foundation for many of these tasks, and each project extends them to meet their specific needs.

Building the demo

The Federated Search Demo repo builds a simulated multiple site environment, with a Solr server to boot, in the comfort of your own VM. Our demo site is expressly designed for both testing and development.

Because the application supports multisite, Domain Access, and standalone sites, we wanted to be able to demo (and develop for) all possible scenarios. To this end, the demo contains four docroots: Drupal 7 standalone, Drupal 7 Domain Access (coming soon), Drupal 8 standalone, Drupal 8 Domain Access. The D8 sites use the amazing core Umami profile to demo with real content, while the D7 site uses Devel Generate for some lorem ipsum-based content.

As of this writing, Domain Access is supported in the Drupal 7 module code, but not installed in the demo profile. The reverse is true for Drupal 8, and making the Drupal 8 version of Federated Search support Domain Access is under active development. We literally had to build the VM in order to finish those features!

There are a lot of dependencies involved, so let’s go to an application diagram:

There’s a lot going on there, but we suggest grabbing the repo and seeing for yourself.

What to expect

Once you clone the demo repo, there are full instructions on getting the VM and Drupal up and running. After installing all of the sites, you can start by visiting http://d8.fs-demo.local and use the search box to test a search (maybe try mushrooms, yum). You should see the React-powered search page with your results and a number of filters on the left side which you can experiment with.

Once you see the search results, you can dig in to how it works. In the Search App Settings (found at admin/config/search-api-federated-solr/search-app/settings) you can control a number of aspects of how the search page is displayed, including its route and title. We set the page to default to ‘/search-app’ so as not to conflict with the default core configuration. Any changes made on this page should clear the cache for the search application and be reflected immediately on refresh.

Next, you may want to see how data is indexed. The search index field config page (found at admin/config/search/search-api/index/federated_search_index/fields) will show a list of all of the mapped fields the site is sending to the index. Clicking on Edit will show you the details of each, showing each bundle in the site and how it’s being sent to the index. The Edit modal includes a token picker, showing the true power of this tool—the ability to use tokens or text at the bundle level to send data to our index.

From this screen, try editing the config for a field, adding a token or changing a format. Once you do that, Search API will prompt you to re-index your data.

You can do so, then refresh the search results to see the changes. You might also want to inspect the raw data being sent to Solr. To do that, visit the Solr dashboard (at http://federated-search-demo.local:8983/solr/#/drupal8/query) and execute the default query. There you can see all of the fields being sent to the index.

Coming back to the search page, inspecting the results with the React Dev Tools will help you understand how the application is handling data. Once you install the browser extension, you can inspect the app, view the React components, see props being passed through the stack, and more. For an even deeper dive into the React application, you can clone that project and build it locally.


In addition to providing a full demo environment, this repo also serves as a development environment for Search API Federated Solr and Search API Field Map. While those modules are installed by Composer, the repo also links them into the ‘/src/’ directory for easy access. From there, you can add a GitHub remote or create patches for Drupal.org.

Issues for the demo can be raised on GitHub, and issues for the modules can be raised on either GitHub or Drupal.org. Be sure to read the handbook on Drupal.org for even more detail on how the system works.

Learn more about Federated Search in this presentation from Decoupled Days (or just view the slides).

Development Drupal Open Source

Palantir: Introducing Federated Search

1 week 2 days ago
Introducing Federated Search brandt Mon, 01/07/2019 - 13:38 Ken Rickard and Avi Schwab Jan 7, 2019

Search API Federated Solr is Palantir's open source solution to federated search.

Last year, Google announced Google Search Appliance would be discontinued. This announcement means that enterprise clients needing a simple yet customizable search application for their internal properties will be left without a solution some time in 2019.

At the request of an existing client, Palantir has worked for the past year to produce a replacement for the GSA and other federated search applications using open-source tools. We abstracted this project into a reusable product to index and serve data across disparate data sources, Drupal and otherwise, and we’re now happy to share it with the community.

What is Federated Search?

We have created an application that allows you to index multiple Drupal (or other) sites to a single search application, and then serve the results out in a consistent manner with a drop-in application that will work on any site where you’re able to add a little CSS and JavaScript.

Federated Search is being released publicly as an open source solution to a common problem. It works out-of-the-box, and can also be customized. There are three main parts to the product:

  • Content indexing via Drupal integration (provided)
  • Result serving via React application (provided)
  • Data storage in a Solr backend (required; we can recommend SearchStax as an option.)
How was Federated Search built?

Every search application, no matter what the implementation, has three main parts: the source, the index, and the results.

Working from the results backward, we began with identifying a schema in which all of our source data would be stored. A basic review of search pages across the internet reveals a fairly common set of features. A title, some descriptive text, and a link are the absolute minimum for displaying search results. Some extra metadata like an image, date, and type are also useful to give the user a richer experience and some filter criteria. Finally, since we’re searching across sites, we’ll need some data about where the item comes from.

With that schema in mind, and knowing Drupal would be our data source, we identified a need to get data from some unknown structure in Drupal (because every site might have vastly different content types) into a fixed set of buckets. Since much of the terminology is the same, the Metatag module quickly came to mind — Metatag allows users to take data from Drupal fields using Tokens and output it into specific meta-tags on the site. With that same pattern in mind, we built Search API Field Map. This module allows us to use tokens to set bundle-level patterns, which all get indexed into the same field in our index.

At Palantir, search is part of every project. We’ve implemented numerous custom and complex search configurations, and almost every time we lean on Apache Solr for our backend. Solr is a CMS-agnostic search index that has a well-supported and robust existing toolchain for Drupal. Search API and Search API Solr provided a solid groundwork from which to build our source plugins, so then the last step was getting our data out. Solr comes out of the box with “Response Writers” that cover almost every known data format, so our options were wide open.

We knew we wanted to provide our client with a CMS-agnostic drop-in interface and that we had a data source that’s fluent in JSON, so that immediately pointed us in the direction of a Javascript framework. The JS space is incredibly dense at the moment, but after some investigation, we settled on React to provide us the robust data management and user interface for our search application.

We started with an existing framework to provide the query handlers and basic front-end components, then extended it with our own set of component packs to build out the user interface. Search API Federated Solr provides the React application as a Drupal library, adds a search block, and surfaces some custom per-site configuration for the search application.

A Flexible, Open Source Search Solution

With Drupal, Solr, and React working together, we’re able to index data from completely arbitrary sources, standardize it, and then output it in an easily consumable way. This approach means more flexibility for site administrators and a cleaner experience for users.

A number of commercial applications exist to provide this functionality, but our solution provides a number of benefits:

  • Keeping the data source tightly coupled with Drupal allows for maximum customization and access to the source content.
  • Providing a decoupled front-end allows us to surface results anywhere, even outside of Drupal.
  • Being built on 100% open-source code allows for community improvement and sharing.
How can you use this or download the code?

Between the Drupal modules and React code, there’s a lot going on to make this application work, and even with those, you’ll still need to bring your own Solr backend to index the data. Luckily, we’ve put all those pieces together into a fully functional demo box using Palantir’s open source Vagrant environment and build tasks.

If you’d like to inspect the pieces individually, here they are:

Palantir plans to maintain these projects as a cohesive unit moving forward, and pull requests or D.o issues on the projects above are always welcome.

Does it have to be a Drupal site?

No! While we provide everything needed to index a Drupal 8 or Drupal 7 site, there’s no reason you can’t configure an additional data source to send content to the same Solr index, as long as it conforms to the required schema. The front-end is also CMS-agnostic, so you could search Drupal sites from WordPress, another CMS, or even from a statically generated site.
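For example, a non-Drupal source only needs to produce documents matching the shared schema before posting them to Solr. A hedged sketch, where the field names follow the schema described earlier and are otherwise assumptions:

```javascript
// Shape arbitrary source items into documents conforming to the
// federated search schema (title, content, url, site, date).
function toUpdatePayload(items, siteName) {
  return items.map(item => ({
    title: item.title,
    content: item.body,
    url: item.url,
    site: siteName,       // source-site label used for filtering
    date: item.published, // ISO 8601 date string
  }));
}

const payload = toUpdatePayload(
  [{ title: 'Hello', body: 'From a static site',
     url: 'https://a.example/hello', published: '2019-01-10' }],
  'Static Site'
);
// POST JSON.stringify(payload) to the index's /update?commit=true endpoint.
```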

You can read how to see Federated Search in action in our Demo blog post or learn more about Federated Search in this presentation from Decoupled Days (or just view the slides).


Acro Media: Online Cannabis Sales: Open Source vs. SaaS

1 week 2 days ago
What we can learn from day one of legal online cannabis sales in Canada

On October 17, 2018, Canada took a progressive step forward as the sale of recreational cannabis became legal across the entire country. It was the end of a prohibition, sparking a wave of new business opportunity. It’s hard to find official numbers for Canada as a whole, but it’s estimated that there were about 212,000 first-day sales across the country worth approximately $28 million! We thought it would be a good opportunity to show some of the benefits of open source vs. SaaS solutions for online cannabis sales.

First off, it’s hard to say exactly how many transactions occurred online for Canada as a whole. It’s up to each province and territory to decide how they want sales to proceed, and stats are quite limited at this point. We do, however, have solid information for a couple of smaller markets that we can start with. Then we can expand with speculation after that.

What we know

Cannabis Yukon

Cannabis Yukon, the Yukon government-run retail outlet, had combined online and in-store sales totalling about $59,900 (source). About 25% of that number, roughly $15,000, was transacted online. The online retail outlet uses the open source platform Drupal.

PEI Cannabis

PEI Cannabis, the Prince Edward Island government-run retail outlet, had combined online and in-store sales totalling about $152,000 (source). About 7% of that number, roughly $21,000, was transacted online. The online retail outlet uses the SaaS platform Shopify. It’s interesting to note that Shopify also runs the provincial online pot shops for Ontario, British Columbia and Newfoundland.

Functionality is the same

All ecommerce cannabis outlets in Canada, government or private, are going to have the same features. They need to block access to minors, they need to sell products based on weight and they need to restrict the maximum amount of cannabis an individual can purchase at one time. All other functionality required is standard ecommerce. Functionality-wise, Cannabis Yukon and PEI Cannabis do the same thing. Whether it’s open source or SaaS, there isn’t an edge either way there.
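Those shared business rules are simple to express in code. As an illustrative sketch only (the 19-year minimum and 30 g per-order cap are example values; actual limits vary by province):

```javascript
// Example business rules shared by every Canadian cannabis store:
// an age gate plus a cap on cannabis weight per order.
const MIN_AGE = 19;              // example; varies by province
const MAX_GRAMS_PER_ORDER = 30;  // example cap

function validateOrder(customerAge, lineItems) {
  if (customerAge < MIN_AGE) return { ok: false, reason: 'underage' };
  // Products are sold by weight, so sum grams across all line items.
  const grams = lineItems.reduce((sum, li) => sum + li.weightGrams * li.qty, 0);
  if (grams > MAX_GRAMS_PER_ORDER) return { ok: false, reason: 'over-limit' };
  return { ok: true };
}

validateOrder(25, [{ weightGrams: 3.5, qty: 2 }]);  // 7 g: allowed
validateOrder(25, [{ weightGrams: 3.5, qty: 10 }]); // 35 g: over the cap
```

Everything beyond checks like these is standard ecommerce, which is why the two platforms end up functionally equivalent.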

Where open source has the advantage

Where it gets interesting, and where the Yukon Government is in a great position to succeed, is commerce architecture and service fees. These are a couple of big reasons why open source is really catching fire in the ecommerce marketplace.

Commerce architecture

Cannabis Yukon is built on the Drupal platform. It’s open source software, meaning there are no service fees to use it, and anyone who uses it can customize and innovate on the software however they like. Development can be done in-house or with any 3rd party development agency familiar with the underlying code, mainly PHP.

An advantage to using a platform like Drupal is that it can integrate and talk to other services your operation may use for accounting, marketing, inventory, customer management, etc. Integrations and automation eliminate swivel chair processes that restrict business growth.

PEI Cannabis, on the other hand, is somewhat vendor-locked on the Shopify platform. Shopify does have a rich ecosystem of integrations, but if there’s ever a need to develop a new integration, PEI Cannabis is restricted to dealing only with Shopify or its small group of partners. That usually means high costs.

Service fees

When a sale is made using a SaaS platform, a certain percentage of the sale is lost to taxes and additional platform-specific transaction fees. In the case of Shopify Plus, the enterprise fee structure is $2,000 per month + 0.25% per transaction, capping at a maximum of $40,000 per month (source). You can optionally use ‘Shopify Payments’ instead, which carries a transaction fee of 1.6% + 30 cents per transaction. This would be the better way to go only if you don’t require any other payment gateways, but in our experience that usually isn’t the case. Finally, in addition to Shopify’s fees, the platform has an extension library for extending your store’s functionality. Most of these extensions carry their own monthly fee, and there’s a very good chance you would need some of them.

With SaaS ecommerce platforms like Shopify, the cost of ownership increases year after year. At minimum, the yearly fees paid to Shopify amount to $24,000, and they can rise as high as $480,000. That doesn’t include any additional extensions that you use or any payment gateway fees. PEI Cannabis must pay these fees (and so do the governments of BC, Ontario and Newfoundland, who also use Shopify).
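The fee structure described above is easy to model. A rough sketch, using the $2,000/month plus 0.25% formula with the $40,000/month cap; Shopify's own fee calculator may produce slightly different numbers, so treat this as an approximation:

```javascript
// Approximate Shopify Plus platform fee for a given month of sales:
// $2,000 base + 0.25% of sales, capped at $40,000/month.
function monthlyPlatformFee(monthlySales) {
  const fee = 2000 + monthlySales * 0.0025;
  return Math.min(fee, 40000);
}

const annualMinimum = 12 * monthlyPlatformFee(0);         // $24,000/year floor
const annualAtCap = 12 * monthlyPlatformFee(252000000);   // $480,000/year ceiling
// For an open source platform, the equivalent licensing fee is simply $0.
```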

Open source ecommerce platforms, on the other hand, don’t necessarily have any of these additional fees. Aside from the standard payment gateway fees and hosting fees, Cannabis Yukon pays no additional monthly or yearly licensing fee to use its ecommerce platform. Whether it sells $15,000 or $15 million, the investment made into the development of the website should pay for itself quite quickly, potentially within a year.

Furthermore, provincial government cannabis retailers are essentially public companies. A large portion of the profit made is to be distributed at the provincial and federal levels to support various public services and initiatives. By utilizing open source technology and therefore avoiding platform-specific fees, the Yukon government will have more capital available for their public services and initiatives. Yukon constituents should be quite happy about that!


Service fee breakdown

Here’s a rough breakdown of potential monthly and annual platform service fees based on some of the numbers we know. We know the combined (online and in-store) day-one sales were elevated due to the hype of legalization, and we know that BC sales dropped by 70% on day two. For our fee breakdown, we’ll therefore reduce each day-one total by 70% and multiply the remaining 30% by 30 days to get a monthly sales estimate. We’ll use the combined total because most ecommerce platforms also support an official in-store point of sale component. This is all speculation of course, but it still shows realistic ecommerce sales numbers and how service fees accumulate based on them.

While the numbers shown below may appear to be quite large at first, Statistics Canada, the national statistics government agency, predicted back in September that legal cannabis sales for the first 3 months will be between $816 million and $1 billion nationwide. If that ends up being true, the numbers below would actually be grossly underestimated!

| Retailer | Day one sales | Est. monthly sales (30% of day one × 30 days) | Open source (monthly and annual fee) | Shopify Plus (monthly, incl. transaction fee) | Shopify Plus (annual, monthly × 12) |
| --- | --- | --- | --- | --- | --- |
| Cannabis Yukon | $59,900 | $539,100 | $0 | $2,994.31 | $35,931.72 |
| PEI Cannabis | $152,000 | $1,368,000 | $0 | $4,523.13 | $54,277.56 |
| Nova Scotia | $660,000 | $5,940,000 | $0 | $12,955.69 | $155,468.28 |
| Alberta | $730,000 | $6,870,000 | $0 | $14,670.97 | $176,051.64 |
| All of Canada * | $28,000,000 | $252,000,000 | $0 | $40,000 (cap) | $480,000 (cap) |

Monthly Shopify Plus figures were taken from Shopify’s fee calculator.

* The government agency Statistics Canada predicts that legal cannabis sales in Canada will be between $816 million and $1 billion (source).

Where SaaS has the advantage

The biggest advantage that SaaS platforms such as Shopify have over open source is the speed at which you can get your product to market and their simplicity of use.

If you’re just starting out and need to get an ecommerce site up and running quickly, these services are turn-key and can get your product to market fast. The website management interface is clean and easy to use, and most people can do what they need to do with little to no training.

There is a reason why companies like Shopify are quite dominant and it’s largely because of the simplicity. While we strongly believe that you shouldn’t choose your platform based on features, many people are willing to pay extra to be able to do it all themselves.


Watching a new industry unfold in Canada has been fun. It’s interesting to see that both open source and SaaS have found their way into the legal cannabis marketplace. Clearly both work for this industry; the choice comes down to what you’re willing to pay and which ecommerce ecosystem you think is best for your business and its future growth.

If you’re thinking about online cannabis retail (or any other online retail for that matter), Acro Media has the expertise and processes in place to help guide you to online commerce success. Try our Digital Commerce Assessment Tool to uncover problematic areas within your digital commerce operations.

Drupal Developer's blog: Building a home automation system based on Domoticz, Xiaomi and Broadlink

1 week 2 days ago

Lullabot: JSON:API 2.0 Has Been Released

1 week 2 days ago

Note: This article is a re-post from Mateu's personal blog.

I have been very vocal about the JSON:API module. I wrote articles, recorded videos, spoke at conferences, wrote software extending it, and at some point I proposed adding JSON:API to Drupal core. Then Wim and Gabe joined the JSON:API team as part of their daily job. That meant that while they took care of most of the issues in the JSON:API queue, I could attend to the other API-First projects more successfully. I have not left the JSON:API project by any means; on the contrary, I'm more involved than before. However, I have transitioned my involvement to feature design and feature sign-off, sprinkled with the occasional development. Wim and Gabe have not only been very empathic and supportive of my situation, they have also taken a lot of ownership of the project. JSON:API is not my baby anymore; instead, we now have joint custody of our JSON:API baby.

As a result of this collaboration Gabe, Wim and I have tagged a stable release of the second version of the JSON:API module. This took a humongous amount of work, but we are very pleased with the result. This has been a long journey, and we are finally there. The JSON:API maintainers are very excited about it.

I know that switching to a new major version is always a little bit scary. You update the module and hope for the best. With major version upgrades, there is no guarantee that your use of the module is still going to work. This is unfortunate as a site owner, but including breaking changes is often the best solution for the module's maintenance and to add new features. The JSON:API maintainers are aware of this. I have gone through the process myself and I have been frustrated by it. This is why we have tried to make the upgrade process as smooth as possible.

What Changed?

If you are a long-time Drupal developer, you have probably wondered "how do I do this D7 thing in D8?" When that happens, the best solution is to search the change records for Drupal core to see if it changed since Drupal 7. Change records are a fantastic tool for tracking what changed in each release. They let you consider only the issues with user-facing changes, avoiding the noise of internal changes and bug fixes. In summary, they help users understand how to migrate from one version to another.

Very few contributed modules use change records. This may be because module maintainers are unaware that this feature exists for contrib. It could also be because maintaining a module is a big burden and manually writing change records is yet another time-consuming task. The JSON:API module has comprehensive change records on all the things you need to pay attention to when upgrading to JSON:API 2.0.


As I mentioned above, if you want to understand what has changed since JSON:API 8.x-1.24 you only need to visit the change records page for JSON:API. However, I want to highlight some important changes.

Config Entity Mutation is now in JSON:API Extras

Mutating configuration entities is no longer possible with JSON:API alone. This feature was removed because the Entity API does a great job of ensuring that access rules are respected, but the Configuration Entity API does not yet support validation of configuration entities. That means the responsibility for validation falls on the client, which has security and data-integrity implications. We felt we ought to move this feature to JSON:API Extras, given that JSON:API 2.x will be added to Drupal core.

No More Custom Field Type Normalizers

This is by far the most controversial change. Even though custom normalizers for JSON:API have been strongly discouraged for a while, JSON:API 2.x will enforce that. Sites that have been in violation of the recommendation will now need to refactor to supported patterns. This was driven by the limitations of the serialization component in Symfony. In particular, we aim to make it possible to derive a consistent schema per resource type. I explained why this is important in this article.

Supported patterns are:

  • Create a computed field. Note that a true computed field will be calculated on every entity load, which may be a good or a bad thing depending on the use case. You can also create stored fields that are calculated on entity presave. The linked documentation has examples for both methods.
  • Write a normalizer at the Data Type level, instead of field or entity level. As a benefit, this normalizer will also work in core REST!
  • Create a Field Enhancer plugin like these, using JSON:API Extras. This is the pattern closest to custom normalizers; it requires you to define the schema of the enhancer.
File URLs

JSON:API pioneered the idea of having a computed url field on file entities that an external application can use without modification. Since then, this feature has made it into core with some minor modifications. The url is no longer a computed field, but a computed property on the uri field.
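In practice, client code that read the old computed field needs a one-line change. A sketch (the sample payload below is abbreviated and illustrative):

```javascript
// Abbreviated JSON:API 2.x file resource: `url` now lives as a computed
// property on the `uri` field instead of a top-level computed field.
const fileResource = {
  type: 'file--file',
  id: 'example-uuid',
  attributes: {
    uri: {
      value: 'public://logo.png',            // Drupal's internal stream URI
      url: '/sites/default/files/logo.png',  // the computed public URL
    },
  },
};

// 1.x clients read resource.attributes.url; 2.x clients read:
const publicUrl = fileResource.attributes.uri.url;
```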

Special Properties

The official JSON:API specification reserves the type and id keys. These keys cannot exist inside of the attributes or relationships sections of a resource object. That's why we are now prepending {entity_type}_ to the key name when those are found. In addition to that, internal fields like the entity ID (nid, tid, etc.) will have drupal_internal__ prepended to them. Finally, we have decided to omit the uuid field given that it already is the resource ID.
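The renaming rules can be illustrated as follows. This is a sketch of the naming convention only, not Drupal's actual implementation:

```javascript
// Rename Drupal field names that collide with keys the JSON:API spec
// reserves (`type`, `id`), and mark internal entity IDs.
function publicFieldName(entityType, fieldName, internalIdField) {
  if (fieldName === 'type' || fieldName === 'id') {
    return `${entityType}_${fieldName}`;     // e.g. a node's "type" field
  }
  if (fieldName === internalIdField) {
    return `drupal_internal__${fieldName}`;  // e.g. nid, tid
  }
  return fieldName;
}

publicFieldName('node', 'type', 'nid');  // → 'node_type'
publicFieldName('node', 'nid', 'nid');   // → 'drupal_internal__nid'
publicFieldName('node', 'title', 'nid'); // → 'title' (unchanged)
```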

Final Goodbye to _format

JSON:API 1.x dropped the need to have the unpopular _format parameter in the URL. Instead, it allowed the more standard Accept: application/vnd.api+json to be used for format negotiation. JSON:API 2.x continues this pattern. This header is now required to have cacheable 4XX error responses, which is an important performance improvement.

Benefits of Upgrading

You have seen that these changes are not very disruptive, and even where they are, it is very simple to upgrade to the new patterns. This will allow you to move to the new version with relative ease. Once you've done that, you will notice some immediate benefits:

  • Performance improvements. Performance improved overall, but especially when using filtering, includes and sparse fieldsets. Some of these improvements came with the help of early adopters during the RC period!
  • Better compatibility with JSON:API clients. That's because JSON:API 2.x also fixes several spec compliance edge case issues.
  • We pledge that you'll be able to transition cleanly to JSON:API in core. This is especially important for future-proofing your sites today.
Benefits of Starting a New Project with the Old JSON:API 1.x

There are truly none. Version 2.x builds on top of 1.x so it carries all the goodness of 1.x plus all the improvements.

If you are starting a new project, you should use JSON:API 2.x.

JSON:API 2.x is what new installs of Contenta CMS will get, and remember that Contenta CMS ships with the most up-to-date recommendations in decoupled Drupal. Star the project on GitHub and keep an eye on it here, if you want.

What Comes Next?

Our highest priority at this point is the inclusion of JSON:API in Drupal core. That means that most of our efforts will be focused on responding to feedback on the core patch and making sure that it does not get stalled.

In addition to that, we will likely tag JSON:API 2.1 very shortly after JSON:API 2.0. It will include:

  1. Binary file uploads using JSON:API.
  2. Support for version negotiation, allowing the latest or default revision to be retrieved. This supports the Content Moderation module in core and will be instrumental in decoupled preview systems.

Our roadmap includes:

  1. Full support for revisions, including accessing a history of revisions. Mutating revisions is blocked on Drupal core providing a revision access API.
  2. Full support for translations. That means that you will be able to create and update translations using JSON:API. That adds on top of the current ability to GET translated entities.
  3. Improvements in hypermedia support. In particular, we aim to include extension points so Drupal sites can include useful related links like add-to-cart, view-on-web, track-purchase, etc.
  4. Self-sufficient schema generation. Right now we rely on the Schemata module in order to generate schemas for the JSON:API resources. That schema is used by OpenAPI to generate documentation and the Admin UI initiative to auto-generate forms. We aim to have more reliable schemas without external dependencies.
  5. More performance improvements. Because JSON:API only provides an HTTP API, implementation details are free to change. This already enabled major performance improvements, but we believe it can still be significantly improved. An example is caching partial serializations.
How Can You Help?

The JSON:API project page has a list of ways you can help, but here are several specific things you can do if you would like to contribute right away:

  1. Write an experience report. This is an issue in the JSON:API queue summarizing the things that you've done with JSON:API, what you liked, and what we can improve. You can see examples of those here. We have greatly improved the module thanks to these in the past. Help us help you!
  2. Help us spread the word. Tweet about this article, blog about the module, promote the JSON:API tooling in JavaScript, etc.
  3. Review the core patch.
  4. Jump into the issue queue to write documentation, propose features, author patches, review code, etc.

Photo by Sagar Patil on Unsplash.

Wim Leers: JSON:API module version two

1 week 2 days ago

Mateu, Gabe and I just released JSON:API 2.0!

Read more about it on Mateu’s blog.

I’m proud of what we’ve achieved. I’m excited to see more projects use it. And I’m confident that we’ll be able to add lots of features in the coming years, without breaking backwards compatibility. I was blown away just now while generating release notes: apparently 63 people contributed. I never realized it was that many. Thanks to all of you :)

I had a bottle of Catalan Ratafia (which has a fascinating history) waiting to celebrate the occasion. Why Ratafia? Mateu is the founder of this module and lives in Mallorca, in the Catalan-speaking Balearic Islands. Txin txin!

If you want to read more about how it reached this point, see the July, October, November and December blog posts I did about our progress.

Specbee: Drupal 8 Now or Drupal 9 later? What’s the right thing to do?

1 week 2 days ago

Did you know that, on average, an adult makes about 35,000 decisions each day?! And suddenly, life feels more difficult. Most are mundane, but taking the right step on an important decision can turn you into a winner. Since the release of Drupal 8 in November 2015, Drupal website owners have been in a dilemma: to upgrade or not to upgrade. To migrate to Drupal 8 now, or simply wait until Drupal 9 releases and skip 8 entirely. And to make things more knotty, there are PHP, Symfony and other version upgrades to keep track of too.

At this point, you might wonder why choose, or stick with, Drupal at all when everything seems so complex and tedious. Why shouldn’t I just switch to a simpler CMS where I can sit back and let my content work its magic, you ask? Here’s the thing: Drupal is an open-source content management framework that is best known for the security, robustness and flexibility it offers. Without constant and consistent updates and patches, Drupal wouldn’t be the relevant, dependable and trusted CMS that it is today. This continuous-innovation approach has helped Drupal offer new features, advanced functionality and security patches with every minor release.

CiviCRM Blog: Webform CiviCRM Integration: new features added in 2018 and looking ahead to 2019

1 week 3 days ago

2018 was a big year for Webform CiviCRM module. I wanted to take a moment to highlight some of the new features that were added in 2018 (with some examples/screenshots) and take a look at what's to come in 2019!

Webform CiviCRM Integration - what is this?

Webform CiviCRM is a Drupal module that, in a nutshell, exposes CiviCRM APIs (with which you can create CiviCRM contacts, contributions, memberships, participant registrations, activities, and just about any other CiviCRM entity programmatically) to the powerful Drupal Webform module, a very popular (over 450,000 Drupal sites are using it) and highly configurable drag-and-drop form builder. Webform CiviCRM is itself a popular module: over 3,000 CiviCRM projects are using it. That's more users/sites than the Mosaico extension has! Webform CiviCRM was invented by Coleman Watts (of the CiviCRM core team) and is supported by the CiviCRM community.

2018 - highlights

The GitHub pull request query is:pr is:closed updated:>2018-01-01 shows 88 PRs closed in 2018! Some highlights include:
  • enhancements to recurring contributions via Webform CiviCRM (thanks to Biodynamics Association (USA) for co-funding this) - you can now configure your webform such that you can pay any amount (Event, Membership) in instalments as well as start a regular recurring open-ended Donation. An example of this would be swim club fees -> a full season is 10 months and costs $3,000 for the entire year. You can now configure your webform such that parents can sign up their child(ren) for Memberships/Events -> and select to pay all at once or in e.g. 10 instalments of $300/month.
  • Stripe support (thanks to contributions by Matthew Wire from MJW Consulting) - Matt has been doing a lot of work on the Stripe extension and we've been supportive of the changes he has submitted as pull requests to the Webform CiviCRM module. This means that Webform CiviCRM is now compatible with all major in-line payment processors: Stripe, iATS Payments and PayPal Pro.
  • being able to configure financial types (thanks to PEMAC (Canada) for funding this) - it is now possible to e.g. charge the correct Sales Tax [which is defined per Financial Type] based on a member's Province/location.

  • added line item support (thanks to Wilderness Committee (Canada) for funding this) - it is now possible to add up to 5 additional lineItems for one Contribution. So you can now do things like: make a donation, purchase a calendar, pay for postage - all on the same webform - and in combination with the financial type improvements - you can control the financial types for every line item, ensuring that (in our example) - the donation becomes eligible for Charitable Tax Receipting but the calendar purchase and the postage do not. 
  • numerous improvements re: cases, activities, memberships (thanks to people at Compucorp and Fuzion, and many others)
  • many other improvements - I apologize for missing anything/anyone!
  • Coleman and I ran a sold-out 2h workshop on Webform CiviCRM at CiviCamp Calgary 2018. We covered lots of features and, in hindsight, wished we had recorded it. Next time! Amongst many other items, we covered how the registration form for CiviCamp Calgary 2018 was built (allowing multiple participants to be signed up for multiple events and also including a partner discount code field).
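As an aside, the per-province tax behaviour mentioned above (charging the correct sales tax based on a member's province, with tax defined per financial type) boils down to a rate lookup. A purely hypothetical sketch, with made-up rates and financial types:

```javascript
// Hypothetical sales tax rates keyed by financial type and province;
// real rates live in CiviCRM's financial type configuration.
const taxRates = {
  Membership: { ON: 0.13, AB: 0.05, BC: 0.05 }, // e.g. HST vs GST-only
  Donation:   { ON: 0.0,  AB: 0.0,  BC: 0.0  }, // donations untaxed
};

// Apply the applicable rate for a line item's financial type and the
// member's province, rounding to cents.
function withTax(amount, financialType, province) {
  const rate = (taxRates[financialType] || {})[province] ?? 0;
  return Math.round(amount * (1 + rate) * 100) / 100;
}

withTax(100, 'Membership', 'ON'); // → 113
withTax(100, 'Donation', 'ON');   // → 100
```

Combined with the line item support above, this is what lets one webform charge tax on a calendar purchase while leaving the donation portion untaxed.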

Also filed under 2018 highlights: Jacob Rockowitz officially released his Drupal 8 version of Drupal webform module - it includes wicked new features that make webforms more portable than ever and new fields like signature fields and my favourite: automated country flags for phone numbers (see screenshot further down).

Looking ahead at 2019

How can user organizations contribute?

