Planet Drupal

Aten Design Group: Entity Import: A User Interface for Drupal 8 Migrations

2 weeks 5 days ago

If you’ve ever wished there was a way to easily import CSV data into your Drupal website, then this post is for you. With the new Entity Import module, website admins can easily add and configure new importers that map CSV source files to Drupal 8 entities. Entity Import adds a user interface for Drupal 8’s core migration functionality. It lets you add importers for any entity in the system, map source data to entity fields, configure process pipelines, and of course, run the import.

Why Another Drupal Import Module?

In Drupal 8 there are already several good options for importing and migrating content using the admin interface.

The Feeds module is one approach for importing content, with a great user interface and huge community of support. Feeds has been around for 7+ years, has been downloaded 1.8 million times, and, according to its Drupal project page, is being used on more than 100,000 websites.

The Migrate module, a part of core for Drupal 8, is an incredibly powerful framework for migrating content into Drupal. With Migrate, developers can create sophisticated migrations and leverage powerful tools like rollback, Drush commands, and countless process plugins for incredibly complex pipelines.
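
To give a feel for the developer workflow: with the contributed Migrate Tools module supplying the Drush integration, running and rolling back a migration is a couple of commands (a sketch; the migration ID is hypothetical):

drush migrate-status                  # List migrations and their processed counts
drush migrate-import my_articles      # Run the hypothetical "my_articles" migration
drush migrate-rollback my_articles    # Undo it, removing the imported entities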

We use both Migrate and Feeds extensively. (Check out this recent post from Joel about getting started with Migrate.) Recently, though, we needed something slightly different: First, we wanted to provide Drupal admins with a user interface for configuring complex migrations with dependencies. Second, we needed Drupal admins to be able to easily run new imports using their configured migrations by simply uploading a CSV. Essentially, we wanted to give Drupal admins an easy-to-use control panel built on top of Drupal core’s migrate system.

My First Use Case: Complex Data Visualizations

Here at Aten, we do a lot of work helping clients with effective data visualizations. Websites like the Guttmacher Institute’s data center and the Commonwealth Fund’s 2018 interactive scorecard are great examples. When I started working on another data-heavy project a few months ago, I needed to build a system for importing complex datasets for dynamic visualizations. Drupal’s core migration system was more than up for the task, but it lacks a simple UI for admins. So I set about building one, and created Entity Import.

Getting Started with Entity Import

Download and Install

Entity Import is a Drupal module and can be downloaded at https://Drupal.org/project/entity_import. Alternatively, you can install Entity Import with composer:

composer require drupal/entity_import

Entity Import is built directly on top of Drupal’s Migrate module, and no other modules are required.
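
If you manage your site with Drush, enabling the module after the Composer step is one more command (assuming a standard Drush setup):

drush en entity_import -y    # Enable the module
drush cr                     # Rebuild caches so the new admin pages appear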

Adding New Importers

Once the Entity Import module is installed, go to Admin > System > Entity Import to add new importers. Click “Add Importer.”

For each importer, you will need to provide:

  • Name - In my case I used “Dataset,” “Indicators,” and “Topics.” Generally speaking, name your importers after whatever type of data you are importing.
  • Description - An optional description field to help your Drupal administrators.
  • Page - Toggle this checkbox if you want to create an admin page for your importer. Your administrators will use the importer page to upload their CSV files.
  • Source Plugin - At the time of this writing, Entity Import provides just one source plugin: CSV. The system is fully pluggable, and I hope to add others – like XML or even direct database connections – in the future.
  • Plugin Options - Once you choose your source Plugin (i.e. CSV) you’ll have plugin-specific configuration options. For CSVs, you can choose whether or not to include a header row, as well as whether or not to support multiple file uploads.
  • Entity Type - Specify which entity type your data should be imported into.
  • Bundles - Once you pick an entity type, you can choose one or more bundles that your data can be imported into.
Configuring Importers

Each Importer you add will have its own configuration options, available under Admin > Configuration > System > Entity Importer. Configuration options include:

  • Mappings - Add and configure mappings to map source data to the appropriate destination fields on your entity bundles. (Important side note: when creating mappings, the human readable name can be whatever you wish; the machine name, however, needs to match the column header from your source data.)
  • Process Plugins - When you add new mappings you can specify one or more process plugins directly in the admin interface. This is where things get interesting – and really powerful. Drupal 8 provides a number of process plugins for running transformations on data to be migrated (read more about process plugins in Joel’s recent migrations post). With Entity Import, you can specify one or more process plugins and drag them into whatever order you wish, building process pipelines as complicated (or simple) as you need. Your admins can even manage dependencies on other imports; for example, mapping category IDs for an article importer to categories from a category importer. Again, no coding necessary. Entity Import provides a user interface for leveraging some of Migrate’s most powerful functionality.
Importing the Data

Once you’ve added and configured importers, go to Admin > Content > [Importer Name] to run a new import. You’ll see three tabs, as follows:

  • Import - This is the main import screen with upload fields to upload your CSV(s). If your import has dependencies specified in any of its process plugins, the import(s) it depends on will show up on this screen as well. (TODO: Currently, the interface for managing multiple, interdependent imports is a little complicated. I’d like to make it easier for users to visualize the status of all interdependent migrations at-a-glance.)
  • Actions - After you run an import, you can rollback using the options on the actions tab. (TODO: I’d like to add other actions as well; for example, the ability to change or reset import statuses.)
  • Logs - Migration logs are listed for each import, allowing admins to quickly see if there are any errors. You can quickly filter logs by message type (i.e. “notice” or “error”).
My Second Use Case: Importing Courses for a University

Soon after wrapping up a working prototype for the data visualization project I mentioned above, I was tasked with another project. A prominent university client needed to quickly import an entire course catalog into Drupal. Beyond running a single import, this particular organization needed the ability to upload new CSVs and update the catalog at any point in the future. The use case was a perfect match for Entity Import. I installed the module, spent a few minutes adding and configuring the course importer, and presto!

Next Steps for Entity Import

Writing clean, reusable code packaged as modules is a huge benefit for Drupal development workflows. Even better, Drupal.org module pages provide really great tools for collaboration with features like issue tracking, release management, and version control built into the interface. I have a few TODOs that I’ll be posting as issues in the days ahead, and I am excited to see if Entity Import fills a need for others like it has for me.

If you run a data- or content-intensive website and have trouble keeping everything up to date, Entity Import might be just the ticket. We’d love to give you a quick demo or talk through how this approach might help – just give us a shout and we’ll follow up!

Promet Source: Web Accessibility Overlays: True Fix or False Pretense?

2 weeks 5 days ago
With the increased number of accessibility lawsuits for inaccessible websites, it's no wonder that offers for quick fixes are a hot commodity. Unfortunately, the saying, “You get what you pay for” may not apply to accessibility overlay solutions. So, what do you do? First, let’s take a look at how quick-fix web accessibility overlay solutions actually work.

Jacob Rockowitz: Open email asking organizations to back the Webform module and Drupal-related Open Collectives

2 weeks 6 days ago

Following up from my previous blog post, "Asking organizations to back a Drupal-related Open Collective."

Below is the email I am sending to organizations within the Drupal community asking them to become a $10 monthly backer of the Webform module and Drupal-related Open Collectives. This email will be sent to people I have spoken to directly at Drupal camps and meetups, as well as organizations listed on the Drupal marketplace.

Read More

OPTASY: 3 Types of Content Management Systems to Consider in 2019: Traditional CMS vs Headless CMS vs Static Site Generators

2 weeks 6 days ago

Kind of stuck here? On one hand, you have all these software development technologies that are gaining momentum these days — API, serverless computing, microservices — while on the other hand, you have a bulky "wishlist" of functionalities and expectations from your future CMS. So, what are the types of content management systems that are and will remain relevant for many years to come and that cover all your feature requirements?

And your list of expectations for this "ideal" enterprise-ready content infrastructure sure isn't a short one:
 

Drupal blog: Regarding critical security patches, we hear your pain.

2 weeks 6 days ago

This post was created jointly by Michael Hess of the Security Working Group, and Tim Lehnen, Executive Director of the Drupal Association.

Last year, with the security release of SA-CORE-2018-002, the most significant security vulnerability since 2014, we heard the pain of site owners and development teams around the world staying up at all hours waiting for the complex security release process to complete and the patch to drop. We heard the pain of agencies and end-user organizations required to put teams on late shifts and overtime. We heard from some users who simply couldn't respond to patch their sites on the day of release, because of lack of resources or entrenched change management policies.

We've heard calls from the community for rotating the timezones for security advisories from release to release, or for having more on-call support from security team members across the globe, or simply for a longer horizon between the release of PSA and SA.

Yet at the same time, we're cognizant that these solutions would put increased burden on a security team composed of dedicated volunteers and contributors. There are a number of generous organizations who sponsor many of the members of the security team, but relying on their altruism alone is not a sustainable long-term solution—especially if we consider expanding the role of the security team to address the larger pain points above.

Last week, with the release of SA-CORE-2019-003, we heard these concerns for site owners and the sustainability of the security team echoed again.

The Security Team and the Drupal Association have been developing solutions for this issue for well over a year.

The goals are simple:

  • Provide a new service to the Drupal community, from small site owners to enterprise-scale end users, to protect their sites in the gap from security release to the time it takes them to patch.
  • Create a new model for sustainability for the Security Team, generating funding that 1) covers the operating costs of the program 2) can support security team operations and 3) can support additional Drupal Association programs.

Although the execution will require care and careful partnership, we are happy to announce that we've found a solution.

We're tentatively calling this: Drupal Steward. It is a service to be provided by the Drupal Association, the Security team, and carefully vetted hosting partners.

Drupal Steward will offer sites a form of mitigation through the implementation of web application firewall rules to prevent mass exploitation of some highly critical vulnerabilities (not all highly critical vulnerabilities can be protected in this fashion, but a good many can be - this method would have worked for SA-CORE-2018-002 for example).

It will come in three versions:

  • Community version - for small sites, low-budget organizations, and non-profits, we will offer a community tier, sold directly by the DA. This will be effectively at cost.
  • Self hosted version - for sites that are too large for the community tier but not hosted by our vendor partners.
  • Partner version - For sites that are hosted on vetted Drupal platform providers, who have demonstrated a commitment of contribution to the project in general and the security team in particular, protection will be available directly through these partners.
Next Steps

The Drupal Association and Security Team are excited to bring this opportunity to the Drupal Community.

We believe that the program outlined above will make this additional peace of mind accessible to the broadest base of our community possible, given the inherent costs, and are hopeful that success will only continue to strengthen Drupal's reputation both for one of the most robust security teams in open source, and for innovating to find new ways to fund the efforts of open source contributors.

We will announce more details of the program over the coming weeks and months as we get it up and running.

If you are a hosting company and are interested in providing this service to your customers, please reach out to us at drupalsteward@drupal.org.

Please also join us at DrupalCon for any questions about this program.

If you are a site owner and have questions, you can join us in Slack at #drupalsteward.

For press inquiries, please contact us at: security-press@drupal.org

Dropsolid: Machine Learning for optimizing search results with Drupal & Apache Solr

2 weeks 6 days ago

Recently, a client of ours shared with us their frustration that their website’s internal site search results didn’t display the most relevant items when searching for certain keywords. They had done their homework and provided us with a list of over 150 keywords and the expected corresponding search result performance. Their site was built with Drupal 7 and relied on Search API & Search API Solr for content indexing and content display.
 

Language Processing

Since the website was in Dutch, we started by applying our usual best practices for word matching. No instance of Apache Solr search will work properly if the word matching and language understanding haven’t been appropriately configured. (see also this blog post)

For this particular website, many of the best practices for Apache Solr Search hadn’t been followed. For example, you always need to make sure to store the rendered output of your content and additional data (such as meta tags) in one single field in the index. This way, it’s a lot easier to search all the relevant data. It will make your optimizations easier further down the road and it will render your queries a lot smaller & faster. Also make sure to filter out any remaining HTML code, as it does not belong in the index. Last but not least, make sure to index this data using as little markup as possible, and get rid of the field labels. You can do this by assigning a specific view mode to each content type. If you’re having trouble with these basics, just get in touch with us so we can give you a hand.
 

Boosting

Once this has been fixed, you can use the boosting functionality in Apache Solr to encode hypotheses about how much more important certain factors should be in the overall ranking of results. The client website mentioned earlier, for example, had custom code that boosted content based on its content type at indexing time.

Apache Solr works with relevance scores to determine where a result should be positioned in relation to all of the other results. Let’s dig into this a bit deeper.

http://localhost:8983/solr/drupal7/select?
  q=ereloonsupplement&                  // Our term that we are searching for
  defType=edismax&                      // The eDisMax query parser is designed to process simple phrases (without complex syntax) entered by users and to search for individual terms across several fields using different weighting (boosts) based on the significance of each field while supporting the full Lucene syntax.
  qf=tm_search_api_aggregation_1^1.0&   // Search on an aggregated Search API field containing all content of a node
  qf=tm_node$title^4.0&                 // Search on a node title
  qf=tm_taxonomy_term$name^4.0&         // Search on a term title
  fl=id,score&                          // Return the id & score field
  fq=index_id:drupal7_index_terms&      // Exclude documents not containing this value
  fq=hash:e3lda9&                       // Exclude documents not containing this value
  rows=10&                              // Return top 10 items
  wt=json&                              // Return it in JSON
  debugQuery=true                       // Show me debugging information

When looking at the debugging information, we can see the following:

"e3lda9-drupal7_index_terms-node/10485": "\n13.695355 = max of:\n 13.695355 = weight(tm_search_api_aggregation_1:ereloonsupplement in 469) [SchemaSimilarity], result of:\n 13.695355 = score(doc=469,freq=6.0 = termFreq=6.0\n), product of:\n 6.926904 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\n 10.0 = docFreq\n 10702.0 = docCount\n 1.9771249 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from:\n 6.0 = termFreq=6.0\n 1.2 = parameter k1\n 0.75 = parameter b\n 248.69698 = avgFieldLength\n 104.0 = fieldLength\n",

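The two numbers being multiplied in that trace are the components of Lucene's BM25 similarity. Reconstructed from the values shown (with k_1 = 1.2 and b = 0.75), they are:

\mathrm{idf} = \ln\left(1 + \frac{\mathrm{docCount} - \mathrm{docFreq} + 0.5}{\mathrm{docFreq} + 0.5}\right),
\qquad
\mathrm{tfNorm} = \frac{\mathrm{freq}\,(k_1 + 1)}{\mathrm{freq} + k_1\left(1 - b + b\,\frac{\mathrm{fieldLength}}{\mathrm{avgFieldLength}}\right)}

and indeed 6.926904 × 1.9771249 ≈ 13.695, the document score reported above.
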

This information shows that item scores are calculated based on the boosting of the fields that our system has had to search through. We can further refine it by adding other boost queries such as:

bq={!func}recip(rord(ds_changed),1,1000,1000)


This kind of boosting has been around in the Drupal codebase for a very long time and it even dates back to Drupal 6. It is possible to boost documents based on the date they were last updated, so that more recent documents will end up with higher scores.
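
Per Solr's function query documentation, recip(x, m, a, b) computes a / (m·x + b). With rord(ds_changed) - the reverse ordinal of the changed date, 1 for the most recently updated document - the boost query above therefore evaluates to:

\mathrm{boost} = \frac{1000}{\mathrm{rord}(\mathit{ds\_changed}) + 1000}

which is close to 1 for fresh content and decays toward 0 for documents far down the recency order.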
 

Solved?

Sounds great, right? Not quite for the customer, though!

As the client is usually going back and forth with the development company for small tweaks, every change you make as a developer to the search boosting requires a full check on all the other search terms. This needs to be done to make sure the boosting you are introducing doesn’t impact other search terms. It’s a constant battle - and it’s a frustrating one. Even more so because in the above scenario the result that was displayed at the top wasn’t the one that the client wanted to show up in the first place. In the screenshot you can see that for this particular query, the most relevant result according to the customer is only ranked as number 7. We’ve had earlier instances where the desired result wouldn’t even be in the top 50!

To tackle this, we use an in-house application that allows end users to indicate which search results are ‘Not Relevant’,  ‘Somewhat Relevant’, ‘Relevant’ or ‘Highly Relevant’, respectively. The application sends direct queries to Solr and allows the client to select certain documents that are relevant for the query. Dropsolid adjusted this so that it can properly work with Drupal Search API Solr indexes for Drupal 7 and 8.

We’ve used the application from the screenshot to fill in all the preferred search results when it comes down to search terms. In the background, it translates this to a JSON document that lists the document IDs per keyword and their customer relevance score from 0 to 3, with 3 being the most relevant.
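
The exact shape of that JSON document is internal to the tool, but conceptually it is a map from keyword to judged document IDs (an illustrative sketch only; the first ID is the document from the debug output above, the second is made up):

{
  "ereloonsupplement": {
    "e3lda9-drupal7_index_terms-node/10485": 3,
    "e3lda9-drupal7_index_terms-node/10612": 1
  }
}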

This is a very important step in any optimization process, as it defines a baseline for the tests that we are going to perform.
 

Footnote: This image is a separate application in Python based on a talk of Sambhav Kothari from Bloomberg Tech at Fosdem.
 

Learning To Rank

We quote from the Apache Solr website: “In information retrieval systems, Learning to Rank is used to re-rank the top N retrieved documents using trained machine learning models. The hope is that such sophisticated models can make more nuanced ranking decisions than standard ranking functions like TF-IDF or BM25.”

Using the baseline that we’ve set, we can now calculate how well our original boosting impacts the search results. What we can calculate, as shown in the screenshot above, is the F-Score, the recall & precision of the end result when using our standard scoring model.

Precision is the ratio of correctly predicted positive observations to the total predicted positive observations. The question that precision answers is: of all results that were labeled as relevant, how many actually surfaced to the top? High precision relates to a low false-positive rate. The higher, the better, with a maximum of 1.
Src: https://blog.exsilio.com/all/accuracy-precision-recall-f1-score-interpretation-of-performance-measures/

Recall is the ratio of correctly predicted positive observations to all observations in the actual class. The question recall answers is: of all the documents that were labeled as relevant, how many actually came back in the results? The higher, the better, with a maximum of 1.
Src: https://blog.exsilio.com/all/accuracy-precision-recall-f1-score-interpretation-of-performance-measures/
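
In standard notation, with TP, FP and FN the counts of true positives, false positives and false negatives, the two metrics and the F-score that combines them are:

\mathrm{Precision} = \frac{TP}{TP + FP},
\qquad
\mathrm{Recall} = \frac{TP}{TP + FN},
\qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}

(The combined scores below are presumably computed per query and then averaged, which is why the reported F-Score need not equal the F_1 of the averaged precision and recall.)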

If we just look at the top-5 documents, the combined F-Score is 0.35, with a precision of 0.28 and a recall of 0.61. This is quite bad, as only about 60% of our relevant documents appear in the top 5 of all search queries. The precision tells us that of the top-5 documents, fewer than 30% were labeled as relevant.
 

Training our model

Before any of this can work, we have to let Solr know which features exist, so that the training can decide the importance of each feature based on the feedback. An example of such a feature could be the freshness of a node - based on the changed timestamp in Drupal - or it could just as well be the score of the query against a specific field or against data such as meta tags. For reference, I’ve listed both below:

{
  "name": "freshnessNodeChanged",
  "class": "org.apache.solr.ltr.feature.SolrFeature",
  "params": {
    "q": "{!func}recip( ms(NOW,ds_node$changed), 3.16e-11, 1, 1)"
  },
  "store": "_DEFAULT_"
},
{
  "name": "metatagScore",
  "class": "org.apache.solr.ltr.feature.SolrFeature",
  "params": {
    "q": "{!edismax qf=tm_metatag_description qf=tm_metatag_keywords qf=tm_metatag_title}${query}"
  },
  "store": "_DEFAULT_"
}

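Feature definitions like these are uploaded to Solr's LTR feature store over HTTP. Assuming the drupal7 core from the query earlier and a hypothetical features.json holding the definitions above, the documented endpoint is used like this:

curl -XPUT 'http://localhost:8983/solr/drupal7/schema/feature-store' \
  --data-binary "@features.json" \
  -H 'Content-type:application/json'
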
Using the RankLib library (https://sourceforge.net/p/lemur/wiki/RankLib/), we can train our model and import it into Apache Solr. There are a couple of different models that you can pick to train - for example Linear or LambdaMART - and you can further refine the training with parameters such as the number of trees and the metric to optimize for.
You can find more details at https://lucene.apache.org/solr/guide/7_4/learning-to-rank.html
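
As a sketch of that training step (file names here are hypothetical; in RankLib, -ranker 6 selects LambdaMART, and -metric2t sets the metric to optimize, matching the NDCG@10 and 100-tree settings visible in the model name used below):

java -jar RankLib.jar -train judgments.txt -ranker 6 \
  -metric2t NDCG@10 -tree 100 -save lambdamart-model.txt

The trained model then still needs to be converted into Solr's LTR JSON model format and uploaded to the model store, the counterpart of the feature store mentioned above.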
 

Applying our model

We can, using the rq parameter, apply our newly trained model and re-rank the first 100 results according to the model.

rq={!ltr+efi.query=ereloonsupplement+model=lambdamart-NDCG@10-100-2019-02-11-12:24+reRankDocs=100}

If we look at the actual result, it shows us that the search results that we’ve marked as relevant are suddenly surfacing to the top. Our model assessed each property that we defined and it learned from the feedback! Hurray!
 


We can also compare all the different models. If we just look at the top-5 documents, the combined F-Score of our best-performing model is 0.47 (vs 0.35), with a precision of 0.36 (vs 0.28) and a recall of 0.89 (vs 0.61). This is a lot better, as almost 90% of our relevant documents now appear in the top 5 of all search queries. The precision tells us that of the top-5 documents, 36% were labeled as relevant. This is a bit skewed, though, as for some queries we only have one highlighted result.

So, to fully compare, I did the same calculation for the first result only. With our original model, only 46% of our desired results appear as the first result. With our best-performing model, we improve this score to 79%!

Obviously we still have some work to do to push 79% up to 100%, but I would like to stress that this result was achieved without changing a single thing in the client’s content. The remaining few cases involve keywords missing from the meta tags, or Dutch words that somehow are not processed correctly in Solr.
 


Of course we wouldn’t be Dropsolid if we hadn’t integrated this back into Drupal! Are you looking to give back to the community and need a hand optimizing your search? Give us the opportunity to contribute this back for you - you won’t regret it.
 

In brief

We compiled a learning dataset, trained our model, and uploaded the result to Apache Solr. Next, we used this model during our queries to re-rank the top 100 results based on the trained model. It is still important to have a good data model, which means getting all the basics of Search covered first.

Need help with optimizing your Drupal 7 or Drupal 8 site search?

Contact us

Nick Veenhof

Sooper Drupal Themes: Expectations for Drupal 9 in 2020

2 weeks 6 days ago
Turmoil in the Drupal community?

Considering the fact that there are around 800,000 websites currently running on Drupal 7, upgrading to the latest installment will be a huge resource drain. Not only will it be an expensive feat to achieve, but also a time-demanding one. Quite frankly, there is not enough time to upgrade all of the Drupal 7 websites, and not enough Drupal developers to take on the workload. So, what do you think: is it feasible for so many websites to upgrade to Drupal 9 in such a short period of time?

Drupal 9 will be released in the summer of 2020

Drupal 8 was released on November 19, 2015, which makes it over three years old already. Its successor, Drupal 9, is making its way towards a release. Drupal 9 is scheduled to be released on June 3, 2020. But what does this mean for Drupal 7 and 8?

For starters, Drupal 8 and 7 will stop receiving support in 2021. This is mainly because Symfony 3, one of the biggest dependencies of Drupal 8, will stop receiving support. Drupal 7 will no longer be backed by the official community or by the Drupal Association on Drupal.org. What this means is that the automated testing services for Drupal 7 will be shut down. On top of that, the Drupal Security Team will stop providing security patches. If you are not able to upgrade to Drupal 9, some organisations will still provide Drupal 7 Vendor Extended Support, which will be a paid service. Despite this, there will be approximately a year's worth of time to plan for and upgrade to the latest installment of Drupal.

Overview of the consequences for Drupal 7

What this means for your Drupal 7 sites is, as of November 2021:

  • Drupal 7 will no longer be supported by the community at large. The community at large will no longer create new projects, fix bugs in existing projects, write documentation, etc. around Drupal 7.
  • There will be no more core commits to Drupal 7.
  • The Drupal Security Team will no longer provide support or Security Advisories for Drupal 7 core or contributed modules, themes, or other projects. Reports about Drupal 7 vulnerabilities might become public, creating zero-day exploits.
  • All Drupal 7 releases on all project pages will be flagged as not supported. Maintainers can change that flag if they desire to.
  • On Drupal 7 sites with the update status module, Drupal Core will show up as unsupported.
  • After November 2021, using Drupal 7 may be flagged as insecure in 3rd party scans as it no longer gets support.
  • Best practice is to not use unsupported software; it would not be advisable to continue to build new Drupal 7 sites.
  • Now is the time to start planning your migration to Drupal 8.

Source: https://www.drupal.org/psa-2019-02-25

Drupal promises a smooth upgrade to Drupal 9

The good news is that the change from Drupal 8 to Drupal 9 will not be as abrupt as the change from Drupal 7 to 8 was. This is because Drupal 9 will be based on Drupal 8; in fact, the first release of Drupal 9 will be similar to the last release of Drupal 8. In short, some new features will be added, the deprecated code will be removed and the dependencies will be updated, but the Drupal experience will not be reinvented. In order to have a really smooth upgrade, the only thing necessary is to keep your Drupal 8 site updated at all times. This will ensure that your upgrade goes as fluidly as possible, without many inconveniences.
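
For a Composer-managed Drupal 8 site, that routine boils down to a few recurring commands (a sketch assuming the drupal/core package and Drush; project layouts vary):

composer update drupal/core --with-dependencies   # Pull in the latest Drupal 8 core release
drush updatedb -y                                 # Apply any pending database updates
drush cache:rebuild                               # Rebuild caches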

What is the best course of action to follow when upgrading to Drupal 9?

Well, at first, you have a couple of options at your disposal:

  1. You wait for Drupal 9 to be launched and then make the change from 7 directly to 9.
  2. You first make the change from 7 to Drupal 8, which is going to be an abrupt change anyway, and then you prepare for the change to Drupal 9.
  3. Your final option would be to find a new CMS altogether, which would be the most resource-hungry option of all.

So, considering the choices at hand, the best of the bunch would be to start preparing the upgrade to Drupal 8 and then, when the time comes, to Drupal 9. By doing this, you will have enough time to plan your upgrade roadmap ahead, without having to compromise on the quality of the upgrade by rushing it. On top of that, this is an opportunity to review how good your website is at attracting leads and converting those leads to sales. Moreover, you can check whether your website is still in line with your enterprise vision and mission statement; if not, then here is an opportunity to make your site reflect those aspects of your business.

Even though change might look scary to some of you, this is also an opportunity to evaluate and improve your digital presence. So make use of this chance to the fullest to create and provide a better online environment for your potential and current customers.

WeKnow: How to choose the best web development agency?

3 weeks ago

So you are looking for help... Either you have a product idea to develop from scratch or you need to augment your IT team; basically, you are looking for the best fit. This read will help you set the right expectations and show you how to evaluate your future business partner effectively, whether you are after a lifetime relationship, a technical team, or the best short-term service.

Let us explore both perspectives: yours and that of the agency you hire. We urge you not to filter by price, but by expertise and overall experience.

Here are our 4 steps to finding the best suitable software agency: 

1. What is expected from you?

Do you know your scope and the objectives of your project? 
This will help you know what to delegate.  Use this general product development list as an idea:


Security public service announcements: Drupal 7 will reach end-of-life in November of 2021 - PSA-2019-02-25

3 weeks ago
Date: 2019-February-25
Vulnerability: Drupal 7 will reach end-of-life in November of 2021
Description:

Drupal 7 was first released in January 2011. In November 2021, after over a decade, Drupal 7 will reach end of life (EOL). (More information on why this date was chosen.) Official community support for version 7 will end, along with support provided by the Drupal Association on Drupal.org. This means that automated testing services for Drupal 7 will be shut down, and there will be no more updates provided by the Drupal Security Team.

When this occurs, Drupal 7 will be marked end-of-life in the update manager, which appears in the Drupal administrative interface. Updates, security fixes, and enhancements will no longer be provided by the community, but may be available on a limited basis from select commercial vendors.

If you have a site that is running on Drupal 7, now is the time to start planning the upgrade. Note that the transition from Drupal 8 to Drupal 9 will not be the significant effort that the transition from 7 to 8 was. In fact, the first release of Drupal 9 will be identical to the last release of Drupal 8, except with deprecated code removed and dependencies updated to newer versions. (See Plan for Drupal 9 for more information on Drupal 9.)

What this means for your Drupal 7 sites is, as of November 2021:

  • Drupal 7 will no longer be supported by the community at large. The community at large will no longer create new projects, fix bugs in existing projects, write documentation, etc. around Drupal 7.
  • There will be no more core commits to Drupal 7.
  • The Drupal Security Team will no longer provide support or Security Advisories for Drupal 7 core or contributed modules, themes, or other projects. Reports about Drupal 7 vulnerabilities might become public, creating zero-day exploits.
  • All Drupal 7 releases on all project pages will be flagged as not supported. Maintainers can change that flag if they desire to.
  • On Drupal 7 sites with the update status module, Drupal Core will show up as unsupported.
  • After November 2021, using Drupal 7 may be flagged as insecure in 3rd party scans as it no longer gets support.
  • Best practice is to not use unsupported software; it would not be advisable to continue to build new Drupal 7 sites.
  • Now is the time to start planning your migration to Drupal 8.

If, for any reason, you are unable to migrate to Drupal 8 or 9 by the time version 7 reaches end of life, there will be a select number of organizations that will provide Drupal 7 Vendor Extended Support (D7ES) for their paying clients. This program is the successor to the successful Drupal 6 LTS program. Like that program, it will be an additional paid service, fully operated by these organizations with some help from the Security Team.

The Drupal Association and Drupal Security Team will publish an announcement once we have selected the Drupal 7 Vendor Extended Support partners.

If you would like more information about the Drupal release cycle, consult the official documentation on Drupal.org. If you would like more information about the upcoming release of Drupal 9, join us at DrupalCon Seattle.

Information for organizations interested in providing commercial Drupal 7 Vendor Extended Support

Organizations interested in providing commercial Drupal 7 Vendor Extended Support to their customers and who have the technical knowledge to maintain Drupal 7 are invited to fill out the application for the Drupal 7 Vendor Extended Support team. The application submission should explain why the vendor is a good fit for the program, and explain how they meet the requirements as outlined below.

Base requirements for this program include:

  • You must have experience in the public issue queue supporting Drupal 7 core or Drupal 7 Modules. You should be able to point to a history of such contribution. One way to measure this is issue credits, but there are other ways. You must continue this throughout your enrollment in the program. If you have other ways to show your experience, feel free to highlight them.
  • You must make a commitment to the Security Team, the Drupal Association, and your customers that you will remain active in this program for 3 years.
  • As a partner, you must contribute to at least 20% of all Drupal 7 Vendor Extended Support module patches and 80% of D7ES core patches in a given year. (Modules that have been moved into core in Drupal 8 count as part of core metrics in Drupal 7) .
  • Any organization involved in this program must have at least 1 member on the Drupal Security Team for at least 3 months prior to joining the program and while a member of the program. (See How to join the Drupal Security Team for information.) This person will need a positive evaluation of their contributions from the Security Working Group.
  • Payment of a Drupal 7 Vendor Extended Support annual fee for program participation is required (around $3000 a year). These fees will go to communication tools for the Drupal 7 Vendor Extended Support vendors and/or the greater community.
  • Payment of a $450 application fee is required.
  • Your company must provide paid support to Drupal 7 clients. This program is not for companies that don't provide services to external clients.
Application review process:

  1. We will confirm that each vendor meets the requirements outlined above and is a good fit for the program.
  2. If the Security Working Group does not think you are a good fit, we will explain why and decline your application. If you are rejected, you are able to reapply. Most rejections will be due to organizations not having enough ongoing contribution to Drupal 7 or not having a Drupal Security Team member at their organization.
  3. The Drupal Association signs off on your participation in the program.
  4. If you are accepted, you will be added to the Drupal 7 Vendor Extended Support vendor mailing list.
  5. The Security Working Group will do a coordinated announcement with the vendors to promote the program.

If you have any questions, you can email D7ES@drupal.org.

Lullabot: Hacking Culture: The Imaginary Band of Drupal Rock Stars at Lullabot

3 weeks ago

Matthew Tift talks with James Sansbury and Matt Westgate about the history of Lullabot, building a team of Drupal experts, and moving away from the phrase "rock star." Ideas about "rock stars" can prevent people from applying to job postings, cause existing team members to feel inadequate, or encourage an attitude that doesn't work well in a client services setting. Rather than criticize past uses of this phrase, we talk about the effects of this phrase on behavior.

OpenSense Labs: OpenSocial for your platform? - Buzzinga!

3 weeks ago

Click, tap, like, hit, post, tweet, retweet, repost, share, tag, comment - I am sure you are familiar with all these terms, use them daily, and even promote your business with them.

We live in a world where the boundaries of work and office space are changing. A new era of transformation has opened up, where collaboration and communication within the company are turned into a “Digital System.”

The heart of all this meaningful connection and real-time communication (which is integral to modern business) is the social intranet.


And nothing beats the performance of OpenSocial, a Drupal distribution that is used for building social communities and intranets.

OpenSocial is bringing power and the essence of pervasive social capabilities to the web.

You ask how?

Well, let’s find out.

Understand OpenSocial 

OpenSocial is an out-of-the-box solution for online communities. It is used for creating social communities, intranets, portals and other social projects. It comes with a collection of features and functionalities that are useful in constructing a social framework.

In the Drupal community, Open Social is positioned as the successor to Drupal Commons (a Drupal 7 distribution providing an out-of-the-box community collaboration website).


OpenSocial and its out-of-the-box features
  • Content types and structure

The user is offered two content types: events and topics. This architecture keeps OpenSocial lightweight, easy to install, and seamless to use. Blogs, news, etc. are content types identical to topics but with separate taxonomies.

  • Media Management 

With the help of media management, the user can efficiently arrange, resize and add images wherever they want on a particular website. The file system, image styles and all other media configurations needed to add, resize and adjust images are built in.

  • Responsive and Multi-Lingual Support 

Open Social follows Drupal 8’s “mobile-first” approach and is responsive “by default”. It also includes the “Translation Module” used for multilingual support.

  • SEO Optimization

The SEO strategy is based on a consultative approach: advertising, SEO, social media and conversion optimization are used to generate traffic. The out-of-the-box features in OpenSocial help users optimize their website so that more people visit it.

OpenSocial Foundation and W3C Social Web Activity

“Social standards are part of the application foundations for the Open Web Platform”
-Jeff Jaffe 

In other words, they will be used everywhere, in diverse applications that operate on phones, cars, televisions, and e-readers. In terms of OpenSocial, the W3C’s work is organized as follows:

  • The Social Web Working Group determines the technical standards and APIs that facilitate access to social functionality as part of the Open Web Platform.
  • The Social Interest Group coordinates messaging around social at the W3C and the strategy that enables social business.

Open source projects at the Apache Foundation

The Apache Software Foundation hosts two active and ongoing projects that serve as reference implementations of OpenSocial technology, in addition to the many commercial enterprise platforms that build on OpenSocial:

Apache Shindig: The reference implementation of the OpenSocial API specifications, versions 1.0.x and 2.0.x. It implements the standard set of social network APIs covering profiles, relationships, activities, etc.

Apache Rave: A lightweight, open-standards-based, extensible platform for managing, combining and hosting OpenSocial and W3C Widget related features, technologies, and services.

How is OpenSocial contributing to society?

The Developers 

Social platforms are interactive and rely on notifications and alerts. Building numerous pieces of social software to control the social experience takes a lot of time and effort. Building a distribution is the answer: it allows developers to build the best things once, then re-use, expand and improve on them.

Site Owner and Business 

If you are using open-source SaaS offerings, you have access to your site’s code and data at any time. Social media changed modern society and communications, especially in our private lives. The decentralized nature of social software is a huge opportunity for organizations to reinvent the way they communicate and collaborate.

End Users 

End users care deeply about user-centered design. Without engaged end users, no project would go anywhere. Providing users with tools that are appealing and easy to use is therefore a must for a great user experience.

Why choose OpenSocial over any other software?

  • Freedom for clients: if they want to download their SaaS platform and run or extend it as they wish, they can easily do so.
  • Getting to this point from scratch takes longer, and the core modules give you the functionality you need from the ground up.
  • The Drupal community puts extra eyes on the code, bringing suggestions for design and development improvements, new features, word-of-mouth marketing, and possibly new clients.
  • It provides easy customization options.

The above points clearly make it the better software.

OpenSocial giving tough competition to other community software in the market

The pace of digitization is steadily increasing, leaving a lot of old processes behind in the dust. The same applies to traditional methods of innovation. The internet has not just become a hub to share knowledge, but also to create knowledge together through crowd innovation.

Some of the other community software in the market, like Lithium, is being beaten hard by OpenSocial.

How?

Let’s find out 

                     Lithium                                         OpenSocial
  Who uses it?       Businesses of all sizes looking to attract      A better way of connecting with your members,
                     new visitors                                    volunteers, employees, and customers
  Free Trial         Not provided                                    Provided
  Free Version       Not provided                                    Provided
  Starting Price     $235.00/month                                   Free
  Entry-level setup  Not provided                                    Provided

What does the Drupal Community Gain From Open Social?

“Without Drupal distributions, we won't be able to successfully compete with commercial vendors. Drupal distributions have great potential.”
-Dries Buytaert

With the Open Social distribution, the Drupal community has been given a platform for its social projects - a more sustainable and widely adopted way of development. OpenSocial is better with Drupal because:

  • Users can use Open Social for their own projects and clients.
  • They can give back to the open-source community.
  • If the user is a Drupal freelancer or professional, they can improve their Drupal.org standing.

Case Study on Youth4Peace 

The UN Security Council acknowledges the positive role played by young women and men in preserving international security. The task force for Youth, Peace, and Security proposed an updated and expanded Youth4Peace platform, in order to give interested parties and partners a way to access consistent and timely information.

The UNDP was already familiar with the features and functionalities of Drupal as the previous site was built on the same. The organization supports open-source mainly because of the reusability feature of modules. 

Moreover, the Drupal 8 community distribution Open Social matched several goals of the project, such as innovation and the use of technology. The distribution already included most of the features needed for the project, including blogs, events, profiles, information streams, a discussion engine, and moderation tools for community managers.

Therefore, the Youth4Peace portal was developed. It was built using an Agile method and mainly focused on:

  • A curated Knowledge Resource Library
  • Moderated e-Discussions & e-Consultations
  • Experts’ Profiles
  • News & Events and their overviews with filters

By being able to produce content for non-community members, the community was able to reach a global audience at an even faster pace.
 


In The End 

Now we know that OpenSocial has the right blend of features that are needed to build a social community. The distribution proves to be an appropriate platform to start building a community or intranet with immense features.

Opensense Labs understands how important it is for every organization to stay connected with the world. Therefore, we are here to leverage all those facilities and services. Ping us at hello@opensenselabs.com now.


OpenSense Labs: Drupal Distributions and Social Impact Platform

3 weeks ago

We all have learned in our biology classes that genes are made up of DNA which gives instructions to the body to grow, develop and live. In other words, it is like a blueprint or like a recipe which guides an individual to do a particular task. 

Just as DNA is essential to a human body, Drupal distributions are necessary to build a social impact platform for your projects and website.


Social impact is, and must be, the primary goal and measure for every social initiative. Measuring social impact urges organizations not to focus only on economic or financial factors, but to assess their influence across environmental and social dimensions.

How Does Building a Social Impact Platform With Drupal Distributions Benefit a Project?

A distribution packages a set of contributed and custom modules together with Drupal core to optimize Drupal for a specific use case and industry. Drupal distributions have evolved from expensive lead-generation tools into products that offer a service at large scale. Here are some notable Drupal distributions:

OpenSocial 

OpenSocial is a free Drupal distribution for building private social networks and an out-of-the-box solution for online communities. Built on Drupal 8, it wraps an array of possibilities into itself by leveraging Drupal 8’s features.

In the Drupal community, Open Social is positioned as the successor to Drupal Commons, a Drupal 7 distribution providing an out-of-the-box community collaboration website.
 

 

  • A case study on Pachamama  

Pachamama empowers the indigenous people of the Amazon rainforest to protect their lands and culture, and educates and inspires people everywhere to bring forth a thriving and sustainable world. Drupal was chosen for its flexibility and customizable features.

For example, Pachamama offers an on- and offline ‘Awakening the Dreamer’ course. In the course module, users can walk through a step-by-step course program and finish with video, text, or an opportunity to keep track of their development and progress. To make this possible within the Pachamama Alliance platform, a course module was integrated into the Open Social platform.
 


Lightning 

A distribution developed and maintained by Acquia, Lightning provides a framework, or starting point, for Drupal 8 projects that require more advanced layouts.

Developers get hundreds of embedded automated tests that allow them to implement continuous integration pipelines. These cover the major functionality, essentially granting a safe environment in which to innovate with their own custom code additions to Lightning.

 

  • A case study on Higgidy 

Higgidy is a thriving business offering incredibly high-quality food sold in supermarkets. Drupal 8 was chosen for this project based on numerous factors.

The potential for future upgrades to bring commerce into the platform was also an appealing benefit, assuring the client that they wouldn’t end up with a fragmented tech stack divided across many platforms.

Being mobile-driven was a core concern of the platform selection, and Drupal 8 delivered a seamless content experience every time.

One of the most important decisions was to make use of Lightning. This gave a great head start for a site of this nature, providing some very important components right out of the box and ensuring the team was able to get going. The site is essentially powered by Views coupled with some custom serialization.
 


Opigno 

Opigno is an open-source e-learning platform based on Drupal. It allows users to manage online training and efficiently guarantee that student, employee and partner skills remain up to date.

Opigno LMS is intended for companies, corporations and universities looking for an e-learning solution that is flexible and easily scalable.
 

Drupal Commerce 

Drupal Commerce is open-source eCommerce software that augments the content management system Drupal. It helps in managing eCommerce websites and applications of all sizes.
This distribution also enforces strict development standards and leverages the greatest features of Drupal 7 and major modules like Views and Rules for maximum flexibility.
 

 

  • Drupal Commerce helping the community: A case study on Sosense

Sosense supports entrepreneurs who address some of the most challenging social and environmental problems. Drupal was selected for this project because it was one of the most relevant frameworks for building a scalable platform.

Sosense’s requirement was to rebrand and redevelop their first, custom-developed platform to improve technical scalability, usability, and interaction design. The project work was straightforward yet rewardingly challenging. On one side, it drew from our expertise in creating community and fundraising solutions. On the other side, Sosense was one of the first complex sites to use Drupal 7, at a time when many important modules were still in dev status.

Testing and debugging modules like Organic Groups, Drupal Commerce and i18n required many unexpected hours of work. The agile project management approach allowed us to tackle some of the unexpected issues with frequent releases and constant client interaction. The project was delivered on time and to the full satisfaction of our client.
 


OpenChurch 

This distribution is for churches and ministries: a flexible platform with common church features that helps streamline website development. Some of the features of this distribution are:

  • Blog - Includes a list page and an archive page; the blog content type is very simple and does not use the core blog module.
  • Bulletin - Includes a block for downloading the latest bulletin, plus a list page and content type.
  • Events - Includes an event content type, filtered by ministry, and a responsive calendar.
  • Gallery - Integrates with ministry content and is an easy way to manage galleries.
  • Giving - Includes a list display for featured charities.
  • Homepage Rotator - A very nice way to feature content on the homepage in a slideshow, a very common feature on sites today.
  • Ministry - Represents a church's core ministries (Missions, Youth, etc.) and integrates with other content on the site.
  • Podcast - An out-of-the-box sermon podcast page. Also includes a block for showing the most recent podcast. It is labeled 'Sermons' but can be used for any kind of podcast.
  • Social - Social integration with Twitter, Facebook, Google+ and more! Enables visitors to share content with their social networks.
  • Staff - Includes a staff page and integrates well with ministries.
  • Video - Add third-party video from YouTube and Vimeo.

Presto

Presto includes pre-configured, ready-to-use functionality right out of the box. It consists of an article content type with some pre-configured fields and a basic page content type with a paragraphs-based body field. Some pre-configured paragraph types in this distribution are:

  • Textbox
  • Image
  • Promo bar
  • Divider
  • Carousel

Not only this, but it also allows embedding Drupal blocks. This distribution has an article listing page which displays a paginated listing of articles, sorted by publish date.

Conclusion  

The advantages of working with a Drupal distribution continue well past launch. Maintenance is also a breeze: when you create a website born out of a distribution, all modules and features are integrated and tested together. When updates are required, it is a single update, as opposed to hundreds. A Drupal distribution may thus be exactly what your social impact platform needs.

At OpenSense Labs we fully support all the functionality that comes with Drupal distributions. Contact us at hello@opensenselabs.com for more information; our services will guide you with all the instructions and information you require.


Gizra.com: Do-It-Yourself Stress Testing

3 weeks ago

Earlier we wrote about stress testing, featuring Blazemeter, where you could learn how to crash your site without worrying about the infrastructure. So why did I even bother to write this post about the do-it-yourself approach? We have a complex frontend app, where it would be nearly impossible to simulate all the network activities faithfully over a long period of time. We wanted to use a browser-based testing framework, namely WebdriverIO with some custom Node.js packages, on Blazemeter, but it proved quicker to manage the infrastructure ourselves and have full control of the environment. What happened in the end? Using a public cloud provider (in our case, Linode), we programmatically launched the needed number of machines temporarily, provisioned them with the proper stack, and executed the WebdriverIO test. With Ansible, the Linode CLI and WebdriverIO, the whole process is repeatable and scalable. Let’s see how!

Infrastructure phase

Any decent cloud provider has an interface to provision and manage cloud machines from code. Given this, if you need an arbitrary number of computers to launch the test, you can have them for 1-2 hours (100 endpoints for the price of a coffee - how does that sound?).

There are many options to dynamically and programmatically create virtual machines for the sake of stress testing. Ansible offers dynamic inventory; however, the cloud provider of our choice wasn’t included in the latest stable version of Ansible (2.7) at the time of this post. Also, the solution below makes the infrastructure phase independent: any kind of provisioning (pure shell scripts, for instance) is possible with minimal adaptation.

Let’s follow the Linode CLI installation guide. The key is to have the configuration file at ~/.linode-cli with the credentials and the machine defaults. Afterwards you can create a machine with a one-liner:

linode-cli linodes create --image "linode/ubuntu18.04" --region eu-central --authorized_keys "$(cat ~/.ssh/id_rsa.pub)" --root_pass "$(date +%s | sha256sum | base64 | head -c 32 ; echo)" --group "stress-test"

Given the specified public key, password-less login will be possible. However, this alone is not enough to start provisioning. Booting takes time, the SSH server is not available immediately, and our particular situation is that after the stress test we would like to drop the instances immediately, together with the test execution, to minimize costs.

Waiting for the machine to boot is a slightly longer snippet; the delimiter-separated output is robustly parsable:

## Wait for boot, to be able to SSH in.
while linode-cli linodes list --group=stress-test --text --delimiter ";" --format 'status' --no-headers | grep -v running
do
  sleep 2
done

However, the SSH connection is likely not yet possible, so let’s wait for the port to be open:

for IP in $(linode-cli linodes list --group=stress-test --text --delimiter ";" --format 'ipv4' --no-headers); do
  while ! nc -z $IP 22 < /dev/null > /dev/null 2>&1; do
    sleep 1
  done
done

You may realize that this overlaps with the wait for booting. The benefit of separating the two is that it allows more sophisticated error handling and reporting.

Afterwards, deleting all machines in our group is trivial:

for ID in $(linode-cli linodes list --group=stress-test --text --delimiter ";" --format 'id' --no-headers); do
  linode-cli linodes delete "$ID"
done

So, after packing everything into one script, with the Ansible invocation in the middle, we end up with stress-test.sh:

#!/bin/bash

LINODE_GROUP="stress-test"
NUMBER_OF_VISITORS="$1"
NUM_RE='^[0-9]+$'

if ! [[ $NUMBER_OF_VISITORS =~ $NUM_RE ]] ; then
  echo "error: Not a number: $NUMBER_OF_VISITORS" >&2; exit 1
fi

if (( $NUMBER_OF_VISITORS > 100 )); then
  echo "warning: Are you sure that you want to create $NUMBER_OF_VISITORS linodes?" >&2; exit 1
fi

echo "Reset the inventory file."
cat /dev/null > hosts

echo "Create the needed linodes, populate the inventory file."
for i in $(seq $NUMBER_OF_VISITORS); do
  linode-cli linodes create --image "linode/ubuntu18.04" --region eu-central --authorized_keys "$(cat ~/.ssh/id_rsa.pub)" --root_pass "$(date +%s | sha256sum | base64 | head -c 32 ; echo)" --group "$LINODE_GROUP" --text --delimiter ";"
done

## Wait for boot.
while linode-cli linodes list --group="$LINODE_GROUP" --text --delimiter ";" --format 'status' --no-headers | grep -v running
do
  sleep 2
done

## Wait for the SSH port.
for IP in $(linode-cli linodes list --group="$LINODE_GROUP" --text --delimiter ";" --format 'ipv4' --no-headers); do
  while ! nc -z $IP 22 < /dev/null > /dev/null 2>&1; do
    sleep 1
  done
  ### Collect the IP for the Ansible hosts file.
  echo "$IP" >> hosts
done

echo "The SSH servers became available"

echo "Execute the playbook"
ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3' -T 300 -i hosts main.yml

echo "Cleanup the created linodes."
for ID in $(linode-cli linodes list --group="$LINODE_GROUP" --text --delimiter ";" --format 'id' --no-headers); do
  linode-cli linodes delete "$ID"
done

Provisioning phase

As written earlier, Ansible is just one option, though a popular one, for provisioning machines. For a test like this, even a handful of shell commands would be sufficient to set up the stack. However, once someone has tasted working with infrastructure in a declarative way, it becomes the first choice.

If this is your first experience with Ansible, check out the official documentation. In a nutshell, we just declare in YAML how the machines should look and what packages they should have.

In my opinion, a simple playbook like the one below is readable and understandable as-is, without any prior knowledge. So our main.yml is the following:

- name: WDIO-based stress test
  hosts: all
  remote_user: root
  tasks:
    - name: Update and upgrade apt packages
      become: true
      apt:
        upgrade: yes
        update_cache: yes
        cache_valid_time: 86400
    - name: WDIO and Chrome dependencies
      package:
        name: "{{ item }}"
        state: present
      with_items:
        - unzip
        - nodejs
        - npm
        - libxss1
        - libappindicator1
        - libindicator7
        - openjdk-8-jre
    - name: Download Chrome
      get_url:
        url: "https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb"
        dest: "/tmp/chrome.deb"
    - name: Install Chrome
      shell: "apt install -y /tmp/chrome.deb"
    - name: Get Chromedriver
      get_url:
        url: "https://chromedriver.storage.googleapis.com/73.0.3683.20/chromedriver_linux64.zip"
        dest: "/tmp/chromedriver.zip"
    - name: Extract Chromedriver
      unarchive:
        remote_src: yes
        src: "/tmp/chromedriver.zip"
        dest: "/tmp"
    - name: Start Chromedriver
      shell: "nohup /tmp/chromedriver &"
    - name: Sync the source code of the WDIO test
      copy:
        src: "wdio"
        dest: "/root/"
    - name: Install WDIO
      shell: "cd /root/wdio && npm install"
    - name: Start date
      debug: var=ansible_date_time.iso8601
    - name: Execute
      shell: 'cd /root/wdio && ./node_modules/.bin/wdio wdio.conf.js --spec specs/stream.js'
    - name: End date
      debug: var=ansible_date_time.iso8601

We install the dependencies for Chrome, then Chrome itself and WDIO, and then we can execute the test. For this simple case, that’s enough. As referred to earlier, the playbook is invoked like this:

ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3' -T 300 -i hosts main.yml

What’s the benefit over shell scripting? For this particular use case, mostly that Ansible executes everything on all hosts in parallel and gives us sufficient error handling and reporting.

Test phase

We love tests. Our starter kit has WebdriverIO tests (among many other types of tests), so we picked it to stress test the full stack. If you are familiar with JavaScript or Node.js, the test code will be easy to grasp:

const assert = require('assert');

describe('podcasts', () => {
  it('should be streamable', () => {
    browser.url('/');
    $('.contact .btn').click();
    browser.url('/team');
    const menu = $('.header.menu .fa-bars');
    menu.waitForDisplayed();
    menu.click();
    $('a=Jobs').click();
    menu.waitForDisplayed();
    menu.click();
    $('a=Podcast').click();
    $('#mep_0 .mejs__controls').waitForDisplayed();
    $('#mep_0 .mejs__play button').click();
    $('span=00:05').waitForDisplayed();
  });
});

This is our spec file, which is the essence of the test, alongside the configuration.

Could we do it with a bunch of requests in JMeter or Gatling? Almost. The icing on the cake is where we stress test the streaming of the podcast: we simulate a user who listens to the podcast for 10 seconds. For any frontend-heavy app, realistic stress testing requires a real browser, and WDIO provides us exactly that.

Figure: The WebdriverIO test execution, with headless mode deactivated

Test execution phase

After making the shell script executable (chmod 750 stress-test.sh), we are able to execute the test either:

  • with one visitor from one virtual machine: ./stress-test.sh 1
  • with 100 visitors, one from each of 100 virtual machines: ./stress-test.sh 100

with the same simplicity. However, for very large scale tests, you should think about bottlenecks, such as the capacity of the datacenter on the testing side. It might make sense to randomly pick a datacenter for each testing machine, as in the sketch below.
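As a minimal sketch of that idea (the region IDs below are assumptions; check linode-cli regions list for the real ones), the create loop could pick a region at random for every machine:

#!/bin/bash
# Sketch: spread the load generators across several datacenters instead
# of hard-coding eu-central. The region list is an assumption; verify
# the IDs with `linode-cli regions list`.
REGIONS=("eu-central" "us-east" "ap-south" "eu-west")

for i in $(seq "$1"); do
  REGION=${REGIONS[$RANDOM % ${#REGIONS[@]}]}
  linode-cli linodes create \
    --image "linode/ubuntu18.04" \
    --region "$REGION" \
    --authorized_keys "$(cat ~/.ssh/id_rsa.pub)" \
    --root_pass "$(date +%s | sha256sum | base64 | head -c 32 ; echo)" \
    --group "stress-test"
done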

The test execution consists of two main parts: bootstrapping the environment and executing the test itself. If bootstrapping takes too high a percentage of the total time, one strategy is to prepare a Docker image, and instead of creating the environment again and again, just use the image; see the sketch after this paragraph. In that case, it’s a great idea to look for a container-specific hosting solution instead of standalone virtual machines.
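As a sketch of that strategy, assuming a pre-baked image that already contains Node.js, Chrome and the WDIO test code (the image name below is hypothetical), each load generator would only need to pull and run it:

# Hypothetical image name; it would be built once with the full WDIO stack
# baked in, replacing the apt/npm provisioning steps of the playbook.
docker pull example/wdio-stress-test:latest
# Run the same spec as in the playbook, then throw the container away.
docker run --rm example/wdio-stress-test:latest \
  ./node_modules/.bin/wdio wdio.conf.js --spec specs/stream.js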

Would you like to try it out now? Just do a git clone https://github.com/Gizra/diy-stress-test.git!

Result analysis

For such a distributed DIY test, analyzing the results can be challenging. For instance, how would you measure requests per second for a browser-based test like the WebdriverIO one?

In our case, the analysis happens on the other side. Almost all hosting solutions we encounter support New Relic, which can help a lot with such an analysis: our test was DIY, but the result handling was outsourced. The icing on the cake is that it helps track down the bottlenecks too, so a similar solution for your hosting platform can be applied as well.

However, what if you’d like to gather the results in one place after such a distributed test execution?

Without going into detail, you may study Ansible’s fetch module, which can gather a result log from all the test servers and store the copies locally in a central place; a minimal sketch follows.
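As a minimal sketch, assuming the WDIO run writes a log file (the path below is hypothetical and depends on your reporter configuration), a task like this could be appended to the playbook above:

- name: Fetch the result log from every test machine
  fetch:
    # Hypothetical path; adjust to wherever your WDIO reporter writes.
    src: /root/wdio/wdio.log
    dest: results/
    # Keep one subdirectory per host so logs don't overwrite each other.
    flat: no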

Conclusion

It was a great experience: after we faced some difficulties with a hosted stress test platform, we were able to recreate a solution from scratch without much more development time. If your application also needs special, unusual tools for stress testing, you might consider this approach. All the chosen components, such as Linode, WebdriverIO or Ansible, are easily replaceable with your favorite solution. Geographically distributed stress testing, fully realistic website visitors with heavy frontend logic, low-cost stress testing: it seems you’re covered!

Continue reading…

OpenSense Labs: Ready to Espouse OpenSource Campus Management?

3 weeks 1 day ago
Vasundhra | Sun, 02/24/2019 - 17:04

Education is like planting a seed: the seed grows through different stages, each with a significant and important role, and if any stage is missed, the plant’s whole life cycle is destroyed.

It is not what is poured into a student that counts, but what is planted
-Linda Conway  

There is no secret to the fact that education has the power to change lives. For a successful career, every student needs to move through the stages of learning (knowledge, confidence, academics and technical skills) so that they can grow efficiently. A college education is one element that contributes greatly to these steps of learning.

To support these steps of knowledge, campus management software has been introduced.


An open source campus management solution is one such piece of software, and it has made life easier for students, teachers, authorities and the other people down the chain. Such a system brings standardization and steadiness to the organization.

But what exactly is OpenCampus?

OpenCampus is a technical model that contributes greatly to the outlook and the network of universities. It was developed as the first open adoption campus management solution in Drupal.

OpenCampus is designed to cover the life cycle of students.

In Germany and Austria, more than 30 universities are using this software, and it serves their needs and requirements well.

With the help of the OpenCampus software you can manage everything: from all the courses through to recording achievements, the application does it all and is considered among the most versatile of its kind. It allows the mapping of complex procedures, such as the allocation of students into smaller classes in medical or dentistry programs.

The Framework

The OpenCampus framework is based on the open source technology Drupal, and it lets customers create their own applications with smooth integration of third-party products such as Moodle.

Features provided by OpenCampus

Each feature area is listed below with its benefits:

  • Application and Admissions: transparent, multi-stage application process; dynamic list views; automatic e-mail notifications; smart forms.
  • Course and Event Management: parallel small groups; automation of complex course sequences; upload of documents, term papers, and personal calendars.
  • Exam Management and Validation: exam questions; written tests and online evaluation; seating plans.
  • Records of Achievements: easy modification following revision of an examination; automatic generation of course certificates; synopsis lists.
  • Evaluation: evaluation via app; flexible configuration; automatic evaluation reports.
  • Mobile Apps and Platforms: integration of students and faculty; forums and online discussions; attendance.


Application and Admission

The application and admission process has been made really simple with the help of OpenCampus. The software presents applicants with a simple tool that uploads and manages all the necessary information in one single place.

Course and Event Management 

The OpenCampus course and event management module is one of the most powerful and flexible of its kind. It handles simple seminars with their locations and automates complex courses, appointments and lecturer assignments. It also supports multilevel architectures with multi-language pages and direct budget control.

Exam management

The software is an innovative web-based solution that grants users extensive functionality for creating a multi-level architecture for any exam. All aspects of exam preparation, from an online mock test to the seating arrangement, are managed seamlessly with OpenCampus.

Records of Achievements 

OpenCampus performance management presents the complete study achievements of students in a clear view. The data of other modules, such as OpenCampus Exams and OpenCampus Event Management, is also stored in this location. Easy modification following the revision of an examination, automatic generation of course certificates and synopsis lists are some of the features offered here.

Evaluation

Continuous and seamless evaluation is the key to ensuring the quality of the teaching and offerings of a university. Users can evaluate standardized courses and receive qualitative as well as quantitative feedback on different areas of teaching, and they benefit from the simple creation of questionnaires and reports, since course management is fully integrated in the system.

Mobile Application 

The OpenCampus software has special support for room management: users can manage their bookings of event and laboratory rooms and their equipment. As the software is mobile responsive, this is even more efficient and handy.


The reason why customers choose OpenCampus 

Higher education institutes carry various responsibilities and data that must be managed accurately and in complete synchronization. OpenCampus bags all the trophies here by providing administration of both students and faculty. There are many reasons why universities choose OpenCampus, some of which are:

  • Unique process mapping: OpenCampus is the only software that manages the complex processes of universities.
  • Comprehensive feature set: the software offers extensive functionality and features to its customers.
  • Open adaptive system: additional modules can easily be added at any time on the open source platform.
  • Established and experienced: more than 25 universities, each with at least 3,000 students, use OpenCampus.

OpenCampus as a Research Data Management System for Clinical Studies

Research institutes need to manage multiple studies with individual data sets, processing rules and different types of permissions. But no “official” or “standard” technology presents an easy-to-use environment for constructing the database and user interface for clinical trials or research studies. Thus, the software solutions in use range from ones made explicitly for a specific study to cost-intensive commercial Clinical Trial Management Systems (CTMS).

With OpenCampus Research, an open adoption software (OAS) solution provides users with a standard environment for state-of-the-art research database management at a very low cost.

The architecture of the open adoption software allows users, after a brief instruction, to develop their own web-based data management system.

The implementation provides the following features:

  • Basic Architecture

OpenCampus content comes in three basic types: forms, trees and containers. Any type of research project or clinical trial can be mapped onto this model, and everything is fully configurable through the graphical user interface.
 

Source: National Center for Biotechnology Information

 

  • Interoperability

Taxonomies allow the user to classify content with terms gathered into vocabularies. With the help of taxonomies, field contents can be stored not just as text, but also as references linked to predefined values.

  • Multicenter 

The OpenCampus approach works really well here. A single study administrator assigns permissions to center coordinators, who then independently distribute access and permissions to the data managers responsible for entering the data.

The multicenter concept can be extended with additional features, such as node state levels or individual data processing guidelines, which ensure that certain quality management actions are executed during data processing.

  • Meta-analysis

A core element of the data storage approach in the OpenCampus OAS concept is that nodes can be connected to each other; the link between nodes is called an entity reference. With entity references, data from many studies can be combined (merged), enabling a meta-analysis to be executed simply by creating a new output view.

  • Data Security  

In terms of security there are two major options: the customer can fill in the online form, or the information can be submitted on premises, preserving doctor-patient confidentiality.

Thus, with the help of the OpenCampus system, the research center and the people working in it were provided with a steady environment, along with the database design and pattern design.

Conclusion

OpenCampus is not just software for small clerical tasks; it goes beyond that, offering a three-way interactive platform for students, teachers and parents. It not only saves time for the administrative staff and their pupils, but also allows fees to be paid online and keeps everyone informed about important information around the university.

OpenSense Labs believes that this contemporary system of education will bring a new level of quality to the education sector. Ping us at hello@opensenselabs.com to know more about open source campus management. The services provided by our organization can help you solve all your queries.


OpenSense Labs: Essential Drupal modules for synchronising content

3 weeks 2 days ago
Shankar | Sat, 02/23/2019 - 14:45

The story of a historical character acquires a plethora of accretions over the centuries, so we have numerous incidents and episodes from his or her life but never the complete picture. Representing historical characters on stage can therefore lead to a fractured narrative: the performance has to stay synchronised with the pre-recorded dialogue without distracting the actors from emoting.


Synchronisation is of great significance in the digital scene as well. Content publishing in Drupal is of utmost importance, given the expanding possibilities of content creation itself, and it is even more crucial when content has to be synchronised between Drupal sites. Why is content synchronisation needed in Drupal?

Need for Content Synchronisation

Drupal 8 offers numerous tools for streamlining content creation and moderation. For instance, the Content Moderation module lets you expand on Drupal’s ‘unpublished’ and ‘published’ states for content and enables you to have a published version that is live and have a different working copy that is undergoing assessment before it is published.

If you need to share content or media across multiple sites, Drupal offers different solutions with content synchronisation capabilities that keep development, staging and production in superb sync by automating the safe provisioning of content, code, templates and digital assets between them. Multiple synchronisation tasks can be created and scheduled to run automatically in the future against different destination servers and sites, or synchronisation tasks can be performed manually via the user interface.

Following are some of the major modules worth considering for your content synchronisation needs:

Deploy - Content Staging


The Deploy module enables users to easily stage content from one Drupal site to another and automatically manages dependencies between entities, like node references. Its rich, extensible API helps with different content staging situations, and it is great for performing cross-site content staging. Using Deploy with RELAXed Web Services helps in staging content between different Drupal websites, and it also works with the Workspace module to offer a workspace preview system for single-site content staging. The API offered by RELAXed Web Services is also spectacular for building fully decoupled sites. With Multiversion, all content entities can be revisioned, and Workbench Moderation ensures that when you moderate a workspace, content is replicated automatically once approved.

Entity Share


The Entity Share module enables you to share entities such as nodes, field collections, taxonomies and media between Drupal instances. It shares entities with the help of JSON API and offers a user interface for leveraging the endpoints provided by the JSON API module; the request below illustrates the kind of endpoint involved.
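For illustration, here is what fetching a collection of articles from a source site could look like; the hostname is a placeholder, and the path follows the JSON API module's standard /jsonapi/{entity_type}/{bundle} pattern:

# Placeholder hostname; the JSON API module exposes entity collections
# at predictable paths such as /jsonapi/node/article.
curl -H "Accept: application/vnd.api+json" \
  "https://source-site.example.com/jsonapi/node/article"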

CMS Content Sync


The CMS Content Sync module offers content synchronisation between Drupal sites with the help of a Node.js-based Sync Core, making it possible to synchronise enormous amounts of content and media assets that couldn’t be managed by Drupal itself. It is wonderful for content staging, as it enables you to test code updates with your content and publish code and content concurrently; it manages the publishing so that you can focus entirely on creating content. It also offers content syndication, letting your entire content be updated and deleted centrally. Moreover, it allows you to connect any of your sites to a Content Pool to which you push your content and media items, and from which remote sites can easily import that content.

Content Synchronisation

Exporting one content item, or a number of content items, from one environment to another efficaciously is possible with the Content Synchronisation module. You can export and import full site content or a single content item, view the difference between the site’s content and the content in YAML files, and import entities with parent/child relationships.

Acquia Content Hub


The distribution and discovery of content from any source, in order to build fantastic multi-channel digital experiences, can be done using the Acquia Content Hub module. It enables you to connect Drupal sites to the Acquia Content Hub service, a cloud-based, centralised content distribution and syndication solution that lets you share and enrich content throughout a network of content sources using extensible, open APIs.

Conclusion

These are some of the most significant solutions available in the enormous list of Drupal modules that are specifically built for enhancing synchronisation of content between Drupal sites.

We have been constantly working towards the provisioning of marvellous digital experiences with our expertise in Drupal development.

Let us know at hello@opensenselabs.com how you want us to help you build innovative solutions using Drupal.


OpenSense Labs: Demystifying BackstopJS

3 weeks 2 days ago
Shankar | Sat, 02/23/2019 - 14:17

A scientist can be awarded a Nobel Prize for an amazing breakthrough in his or her research. More often than not, the leading scientist is backstopped in that research by a team of hugely talented assistants.


Speaking of backstopping, you need a backstop in the digital landscape too. When you make a change to your website, you have to be sure it is free of unintended side effects. This is where BackstopJS comes in: an intuitive tool that enables swift configuration and helps you get up and running quickly. Before we look at how it can be leveraged with Drupal, let’s dive deeper into visual regression testing and BackstopJS.

Traversing visual regression testing and BackstopJS

Visual regression testing focuses on identifying visual changes between iterations or versions of a site. Reference images are created for every component as it is built, which enables comparison over time to monitor changes. Developers can run these tests in their local development environment after making changes to ensure that no regression issues transpire because of them.

Visual regression testing focuses on identifying visual changes between iterations or versions of a site.

Visual regression testing is hugely beneficial in giving developers more test coverage than they could achieve manually, ensuring that changes do not cause regressions in other components. It provides a more detailed comparison than you would get by reviewing the site manually, and even without a full understanding of the project, developers know what the website should look like before their changes.


BackstopJS, an open source project, is a great tool for performing visual regression testing. It runs visual tests with the help of headless browsers to capture screenshots. It was created by Garris Shipon and originally ran on the PhantomJS or SlimerJS headless browser libraries. It now supports screen rendering with headless Chrome, and you can add your own interactions using Puppeteer and ChromyJS scripting.

BackstopJS is leveraged for running visual tests with the help of headless browsers to capture screenshots

It offers an excellent comparison tool for identifying and highlighting differences and lets you set up several breakpoints for testing responsive sites. Moreover, it uses simple CSS selectors to identify what to capture. It provides an in-browser reporting user interface with a customisable layout, scenario display filtering and more. Furthermore, it comes with integrated Docker rendering, works superbly with continuous integration and source control, and gives you CLI and JUnit reports.

Workflow and installation

It’s pretty easy to install and configure BackstopJS:

  • Installation (global): npm install -g backstopjs
  • Installation (local): npm install --save-dev backstopjs
  • Configuration: backstop init (creates a backstop.json template)

Following is the workflow of BackstopJS:

  • backstop init: A new BackstopJS instance is set up by specifying URLs, cookies, screen sizes, DOM selectors, interactions and so on; a minimal backstop.json sketch follows this list.
  • backstop test: BackstopJS creates a set of test screengrabs, compares them with your reference screengrabs, and shows any alterations in a visual report.
  • backstop approve: If all looks fine after the test, you approve it, promoting the latest test screengrabs to become the new reference.
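For orientation, here is a minimal sketch of what a backstop.json could contain; the id, URL, selector and threshold values are placeholder assumptions rather than the output of a real backstop init:

{
  "id": "my_drupal_site",
  "viewports": [
    { "label": "phone", "width": 320, "height": 480 },
    { "label": "desktop", "width": 1280, "height": 1024 }
  ],
  "scenarios": [
    {
      "label": "Homepage",
      "url": "https://example.com/",
      "selectors": ["document"],
      "misMatchThreshold": 0.1
    }
  ],
  "paths": {
    "bitmaps_reference": "backstop_data/bitmaps_reference",
    "bitmaps_test": "backstop_data/bitmaps_test",
    "html_report": "backstop_data/html_report",
    "ci_report": "backstop_data/ci_report"
  },
  "report": ["browser"],
  "engine": "puppeteer"
}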
BackstopJS with Drupal

A Drupal community event in 2018 introduced a Drupal 8 module called Backstop Generator for creating backstop.json configuration files on the basis of a site’s unique content.


This Drupal 8 module exposes an administrative configuration form for creating a BackstopJS visual testing profile based on a Drupal website’s content. It assists you in creating backstop scenarios from Drupal pages, defining random pages to include as scenarios, and toggling viewport sizes on and off. The result is a backstop.json file that you place into your backstop directory, replacing the existing backstop.json. The module requires the Serialization, HAL and REST modules. It is worth noting that it is not covered by Drupal’s security advisory policy.

Backstop Generator can do a lot more. You can contribute towards building more interesting features and join the issue queue on Drupal.org to submit a patch or report a bug.

Conclusion

Deploying frontend changes can be troublesome. Visual regression testing software like BackstopJS ensures that our changes are accurate and contained, and it can be a great fit for Drupal sites.

We have been offering a suite of services to help digital firms fulfil their dreams of digital transformation.

Contact us at hello@opensenselabs.com and let us know how you want us to be part of your digital transformation plans.
 


OpenSense Labs: Integrating R with Drupal

3 weeks 2 days ago
Shankar | Sat, 02/23/2019 - 13:33

R can be a powerful solution for predicting re-booking rates from previous guest ratings and thereby automating guest/host matching. That’s exactly what analysts at Airbnb, the online marketplace and hospitality service provider, have done with it: the company has leveraged R to drive numerous initiatives with the help of an internal R package called Rbnb.


As a growing organisation, Airbnb’s inclination towards R for enhancing its business operations speaks volumes about this programming language. R offers powerful analytics, statistics and visualisations, and in combination with Drupal it can be a spectacular option for an innovative solution. Let’s look at the origins of R and its significance before heading over to the integration of Drupal and R.

A sip of history

Microsoft recounts that a stroll through the history of R takes us back to the ‘90s, when it was implemented by Robert Gentleman and Ross Ihaka (faculty members at the University of Auckland). It commenced as an open source project in 1995, and the R Core Group has handled the project since 1997. The first version was released in the year 2000. It was closely modelled on the S language for statistical computing.

R programming: An exploration


R, as an open source programming language and software environment, is magnificent for statistical computing and graphics. It is used prominently by data scientists and statisticians alike, and has the support of the R Foundation for Statistical Computing and a huge community of open source developers. Those accustomed to GUI-focused programs like the Statistical Package for the Social Sciences (SPSS) and the Statistical Analysis System (SAS) can find R difficult in the beginning, as it relies on a command line interface; in that case, RStudio can be a great help.

R is a language and environment for statistical computing and graphics. - r-project.org

R offers a wide array of statistical techniques like linear and non-linear modelling, classical statistical tests, time-series analysis, classification, clustering among others. In addition to this, it also provides graphical techniques and is very extensible.

R has seen remarkable growth, as can be seen in the following figure.

Source: Stack Overflow

Even in the RedMonk programming language rankings, which compare languages’ appearances on GitHub (usage) and Stack Overflow (support), R maintains its position near the top.


Academia, healthcare and government are the industries that visit R questions the most in the U.S. and U.K.

Source: Stack Overflow

The Burtch Works survey shows that R is the second choice of data scientists, and its flexibility makes it great for predictive analytics. Research practitioners who want to do both sorts of analysis, and in future apply machine learning to their marketing research processes, will find R a great option.

Source: Nebu

Why choose R?

Following are some of the pivotal reasons that state the importance of R:

Documentation

Online resources for R, such as its message boards, are superbly supported and well documented.

Analysis

R helps in performing data acquisition, cleaning and analysis all in one place.

Data Visualisation

R has fantastic tools for data visualisation, analysis and representation.

Figure: A ggplot2 result | Source: IBM

This is a good example of the data visualisation provided by the R package ggplot2: it displays the relationship between yards per game, touchdowns and first downs.

Easy to learn

You can get up to speed quickly, as R has a gentle learning curve. For instance, thanks to the simplicity of the language, you can easily create three samples and draw a bar graph of that data, as the figure below shows.

Figure: Three random samples | Source: IBM

Machine learning

R makes machine learning much easier and more approachable, with an abundance of machine learning packages such as MICE (for taking care of missing values) and CARET (for classification and regression training), among others.

Drupal in the picture

A digital agency integrated Drupal with R for an insurance company that envisions offering a highly personalised matching service to help people select the right insurance program. The company leveraged R to build an algorithm that calculates a compatibility score. The main notion was to efficaciously match prospective customers to the insurance carrier based on each customer’s needs and preferences.

The company leveraged R to build an algorithm that calculates a compatibility score, efficaciously matching prospective customers to the insurance carrier based on the customer’s needs.

There are intricacies involved in calculating the compatibility score, which is a function of elements like user persona and price sensitivity, among others. Numerous unique customer personas were identified in the process, based on demographic factors (gender, age, education, etc.) and car ownership details (car type, mileage, etc.). Once a prospect is identified as a particular persona, they are mapped to each of the insurance providers on the basis of a score and customer satisfaction survey ratings.
 
Moreover, different scores are calculated to track the customer’s sensitivity to price, customer service levels, and so on. This is done through a web service that links to the insurance carriers’ information and offers a quote for the customer on the basis of the numerous information points they provided. Finally, consolidating all these scores into two parameters gives a final score that helps the prospect select the right insurance carrier.

An insurance portal was built in Drupal around the customer persona questionnaire, which prospects fill in to find the best carrier for their spending styles and other preferences. Once the prospect enters the information, it is passed from Drupal to R in JSON format. R already holds a plethora of customer data, which is also leveraged by the algorithm. Quotes pulled from multiple carriers are processed via the web service. Based on the algorithm, R processes all the data and sends back the best carrier match options for the prospect, who can then select a carrier based on his or her preferences.

Conclusion

R, along with Drupal, is a marvellous option for data experts to do everything from mapping social and marketing trends online to building financial and climate models that give an extra push to our economies and communities.
 
We have been offering a suite of services for fulfilling the digital transformation endeavours of our partners.
 
Let us know at hello@opensenselabs.com how you want us to be a part of your digital innovation plans.
 
