Sean Reiser

Hi I'm Seán Reiser, this is my Personal Blog

#NewYorker #DrupalDeveloper #InfoSec #Photographer #GEEK #Whovian #MYSTie #LetsGoYankees #LongSufferingJetsFan #NAKnight #Quinquagenarian #CommitAwesome


I’ve commented on this blog about the trend of recruiters asking for the last 4 digits of a candidate’s SSN in their first contact email (along with name, date of birth, location, and other PII).  I thought I’d consolidate my thoughts in a post.

Let me explain the format of an SSN:

The first 3 digits are tied to the state where the applicant applied for their SSN.  Since most people in the US are born, live, and die within a 50-mile radius, this becomes guessable.

The next 2 are a group number and can be tied to the year the applicant applied for their SSN.  States only issue from a few group numbers a year, and most Americans are issued their SSN in the first couple of years of their life.  If a recruiter doesn’t know a candidate’s age or year of birth, it can be estimated from info on the candidate’s resume (graduation year, or when the candidate entered the workforce).

The last 4 are assigned in sequence and aren’t really derivable from a candidate’s information.
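To make the arithmetic concrete, here's a quick back-of-the-envelope sketch in Python. The area and group numbers below are invented for illustration; they aren't real SSA assignments:

```python
# Enumerate the full SSNs consistent with a guessed area (first 3 digits),
# a guessed group (middle 2 digits), and a known last-4.
def candidate_ssns(area_guesses, group_guesses, last_four):
    return [
        f"{area:03d}-{group:02d}-{last_four:04d}"
        for area in area_guesses
        for group in group_guesses
    ]

# Suppose the candidate's home state used a handful of area numbers, and
# only a few group numbers were being issued around their birth year:
ssns = candidate_ssns(area_guesses=[50, 51, 52],
                      group_guesses=[10, 12, 14, 16],
                      last_four=1234)
print(len(ssns))  # 3 areas x 4 groups = 12 full SSNs to try
```

With the last 4 digits in hand, the attacker's search space is just (area guesses) × (group guesses).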

So for a large portion of candidates, someone holding the last 4 digits can whittle the possible SSNs down to somewhere between 1 and 100 options.  You can see why I am concerned that this could be a phishing attack: someone I don’t know asking for information that can lead to identity theft.  Add the fact that you’re submitting the info via email, which is insecure by design, and there’s yet another vector for possible theft.

I understand that firms are using Candidate Tracking Systems where the last 4 digits of a SSN are used to ensure candidates don’t get double submitted, but there are risks involved that I think many people are unaware of.


This site has a linkblog and I thought I'd do a quick writeup on how I capture the links and their metadata.  You'll notice the links are displayed in cards, similar to what you see on social media sites such as Facebook and Twitter.

This writeup will cover:

  1. The content type to store a link and its metadata.
  2. Creating a bookmarklet so you can easily add a story to your site as you surf the web.
  3. Scraping a webpage's metadata to get the image, site name, title, and description.

This writeup assumes that you have a basic understanding of Drupal at a site-builder level.  I'm assuming you understand basic administration tasks such as creating content types and fields, as well as how to create a module.

Although I wrote the code for Drupal 9, as I review it, I see no reason that it won't work on Drupal 8.   Since support for Drupal 8 is ending, you should be upgrading to Drupal 9, but that's a different matter.

Creating The Content Type

You'll need a content type to house the links.  On this site I'm using my generic note type, which I use for most of my blog posts (this allows me to add a link to any post).  But assuming you want to use a separate content type, let's create one named link.  In addition to the standard title and body fields, you'll want to give it the following fields:


Field Name               Field Type
OG Link Description      Text (plain, long)
OG Link Image Url        Text (plain, long)
OG Link Site Name        Text (plain)
OG Link Title            Text (plain)
HTTP Status Code         Text (plain, long)


Of course, you can just add these fields to an existing content type as I did; you'll just need to adjust the code as you go forward.

Building the Bookmarklet

Simply put, a bookmarklet is a browser bookmark that contains JavaScript.  We're going to create one that will open a node add form with the URL of the current page already prepopulated into the link field.  This saves you the effort of copying the current URL, opening your site, navigating to the node add form, and pasting in the URL of the page you want to blog.  There are 2 things we need to do to make this work:

First, we'll get Drupal to accept a parameter on the node add form's URL and prepopulate the link field.  We either need to create a new module or reuse a module you already have for glue code, and implement hook_form_alter.
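Here's a minimal sketch of that hook. The module name (mymodule) is a placeholder, and I'm assuming field_link is a core Link field; adjust the form ID and widget path to match your setup:

```php
<?php

use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_alter().
 *
 * On the link content type's node add form, copy a ?link= query
 * string into field_link's default value.
 */
function mymodule_form_alter(&$form, FormStateInterface $form_state, $form_id) {
  if ($form_id === 'node_link_form') {
    $link = \Drupal::request()->query->get('link');
    if (!empty($link)) {
      $form['field_link']['widget'][0]['uri']['#default_value'] = $link;
    }
  }
}
```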

This code basically says, "when loading the node add page for the link content type, look to see if there is a 'link' query string, and if there is, put the contents of the query string into field_link."

Next we need to get the query string into the URL .... that's where the bookmarklet comes into play.  Here's a little JavaScript:
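Something like the following does the trick; "https://example.com" is a placeholder for your own domain, and the path assumes the link content type from above:

```javascript
// Bookmarklet: jump to this site's node add form for the link content
// type, passing the current page's URL in the ?link= query string.
// "https://example.com" is a placeholder -- use your own site's URL.
javascript:window.location.href =
  'https://example.com/node/add/link?link='
  + encodeURIComponent(window.location.href);
```

Bookmarks want this on a single line, so collapse the whitespace when you paste it in.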

You need to replace the placeholder URL with your site's URL.  Just add a bookmark in your browser, call it something like "Add Link To My Site", and paste the JavaScript in as the link.  Add the bookmark to the favorites bar, and when you're on a page that you want to blog about, click the button, add any commentary in the body field, and rock and roll.

There is a contributed module, Prepopulate, which accomplishes the same thing (and more) but carries a little more overhead than the couple of lines of code I wrote here.  Plus, if we use contrib for the easy things, we'll never learn anything.

Fetching Metadata

Next we need to fetch the image URL, site name, title, and description.   You can either scrape the content for metadata server-side at save time or client-side when rendering the page.  I prefer doing it at save time, since doing it client-side will slow down page loads.  Of course, since you're caching the information, if the site changes any of the metadata, your site will be out of date.

Instead of writing code to parse out the metadata, I took advantage of opengraph.php, a library that does the heavy lifting.  Very simply, I used hook_ENTITY_TYPE_presave to populate the appropriate fields.  You can put this in the same module from above:
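A sketch of that hook is below. The module name and the field machine names are placeholders, and I'm assuming opengraph.php's OpenGraph class is reachable through the autoloader; adjust all of that for your site:

```php
<?php

use Drupal\node\NodeInterface;

/**
 * Implements hook_ENTITY_TYPE_presave() for node entities.
 *
 * When a link node is saved, fetch the target page and cache its
 * Open Graph metadata in the node's fields.
 */
function mymodule_node_presave(NodeInterface $node) {
  if ($node->bundle() !== 'link' || $node->get('field_link')->isEmpty()) {
    return;
  }
  $url = $node->get('field_link')->uri;

  // Record the HTTP status of the page we're linking to.
  try {
    $response = \Drupal::httpClient()->get($url);
    $node->set('field_http_status_code', (string) $response->getStatusCode());
  }
  catch (\Exception $e) {
    return;
  }

  // Let the opengraph.php library do the heavy lifting of parsing
  // the og: meta tags out of the page.
  $graph = OpenGraph::fetch($url);
  if ($graph) {
    $node->set('field_og_link_title', $graph->title);
    $node->set('field_og_link_description', $graph->description);
    $node->set('field_og_link_site_name', $graph->site_name);
    $node->set('field_og_link_image_url', $graph->image);
  }
}
```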

This loads the open graph library, loads the page into a variable, passes the page to the library to find the metadata, and then adds it to the node before it's saved.


Politics aside… I’ve done a lot of work in Drupal, many amazing things. To be honest, the expression “if you have a hammer, everything looks like a nail” can apply to me and Drupal. If I were building a Twitter-like social network, the last thing I’d do is think Drupal, especially with platforms like Mastodon out there.

I’m in the middle of a Drupal 8 -> Drupal 9 migration.  After the data migration, I decided to nose around the database and found a number of tables that I believe are vestigial and should’ve been dropped a while ago:

  • Migration Tables (migrate_map_upgrade_d7_*) - tables that were used when I migrated this site from Drupal 7 into D8 five years ago.
  • Empty Field Tables (node__field_*) and Node Field Revision Tables (node_revision__field_*) - left over from fields that were deleted years ago, generally from deleted content types.  The same cruft shows up for paragraph, taxonomy, and media bundles.
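For anyone who wants to poke at their own database, a quick sweep looks roughly like this (assuming Drush and a MySQL-flavored database; the backslashes keep LIKE's underscore wildcard literal):

```shell
# Leftover Drupal-7-era migration map tables.
drush sql-query "SHOW TABLES LIKE 'migrate\_map\_upgrade\_d7\_%'"

# Field data and revision tables, to cross-check against the fields
# that still exist on the site.
drush sql-query "SHOW TABLES LIKE 'node\_\_field\_%'"
drush sql-query "SHOW TABLES LIKE 'node\_revision\_\_field\_%'"
```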

The fact that this stuff lingers around surprised me.  In my database management days, cruft always bothered me.  Is there any danger in clearing this out by hand (dropping tables)?  Do any developers monitor these things?


Happy Drupal Security Update Day!

Everyone should take note: [after the update] "Additionally, it's recommended that you audit all previously uploaded files to check for malicious extensions. Look specifically for files that include more than one extension, like .php.txt or .html.gif"

"find . -name \*.html.\*" and "find . -name \*.php.\*” should do it.

Good Luck, be safe and
May the Force Be With You!


So, the other day on ABC News Rahm Emanuel was discussing layoffs in the retail sector and suggested the Biden Administration, "Give them the tools [and say] 'Six months, you're going to become a computer coder. We'll pay for it.' And you'll get millions of people to sign up for that. They're not going back to parts of the retail economy and we need to give them a lifeline to what's the next chapter."

This is why I hate the term 'coder'.  It takes Application Development and reduces it to its simplest part.  Application Development is a craft, like carpentry.  Reducing it to just coding would be like reducing carpentry to hammering and telling people they just need to learn to hammer.  The reason we have failures like the first Obamacare* site is this myth that anyone can be taken off the street and made competent in 6 months.  I’m 51, I’ve been writing code since I was 13, and I’m still learning every day.

I believe in retraining.  I believe that some portion of retail employees might be able to become developers.  But the “teach them to code” mantra is downright insulting to me and anyone who takes this craft seriously.

*I mention the Obamacare site not as a knock on the Ex-President, just because it was one of the biggest blunders I could think of.


So, I have a dropdown menu on my blog page and I wanted the title of the menu to be the link to the current page.  I'm using a standard Bootstrap 3 dropdown for the menu.  Since this is a Drupal 8 site, Drupal thankfully adds a class of is-active to the active a tag.  The relevant code is chunked out below:
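The markup in question is a stock Bootstrap 3 dropdown, roughly like this (the menu items and paths are illustrative, reconstructed from memory):

```html
<!-- A standard Bootstrap 3 dropdown; Drupal 8 adds is-active to the
     link that points at the current page. -->
<div class="dropdown">
  <button class="btn dropdown-toggle" type="button" data-toggle="dropdown">
    Blog <span class="caret"></span>
  </button>
  <ul class="dropdown-menu">
    <li><a href="/blog/drupal">Drupal</a></li>
    <li><a href="/blog/life" class="is-active">Life</a></li>
    <li><a href="/blog/photography">Photography</a></li>
  </ul>
</div>
```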

To swap out the title, I added the following JS:
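A vanilla-JavaScript version of the idea looks like this; the selectors assume a stock Bootstrap 3 dropdown, so adjust them to your markup:

```javascript
// After the page loads, copy the active menu link's text into the
// dropdown toggle, keeping the Bootstrap caret in place.
document.addEventListener('DOMContentLoaded', function () {
  var active = document.querySelector('.dropdown-menu a.is-active');
  var toggle = document.querySelector('.dropdown-toggle');
  if (active && toggle) {
    var caret = toggle.querySelector('.caret');
    toggle.textContent = active.textContent + ' ';
    if (caret) {
      toggle.appendChild(caret);  // re-attach the caret we detached
    }
  }
});
```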

When you go to /blog/life, the menu title is replaced by "Life".

Cross-posted from Stack Exchange.

So, I’m doing some delayed maintenance on this site.  You know, removing modules I don’t need anymore, cleaning up taxonomy, etc., etc.  There’s technical debt on this site, because you know what they say about the cobbler’s children and shoes.  When I decide it’s time to move the site to D9, I want the transition to be smooth.  The site has been around a while: it was originally a Drupal 4.7 site, which has been upgraded through 5, 6, and 7, and is currently a D8 site.

So, enough preamble.  Back in the site’s D5 days I had 4 Drupal sites that I merged into this one.  Since there wasn’t a Migrate API 12 years ago, I renumbered the NIDs inside the source sites and then copied the database records into the target site.  In case I needed to debug things, I left gaps between each site’s nodes (I added 2000 to the NIDs from siteA, 4000 to the NIDs from siteB, etc.) so I could easily tell where the nodes came from.

This worked, and the merged site has been chugging along for over a decade.  I’ve come to a point where the system has ~3,000 nodes but the highest NID is approaching 11,000.  I don’t see any risk here; I’ve worked on much larger sites.  I’m in no danger of exceeding the max number of NIDs anytime soon.

My question is: is there any benefit to compressing the NIDs (probably by migrating to a fresh Drupal installation)?  I don’t see one, but I might be missing something that someone else can see.

As I was rebuilding this site, I added a neat design element: when you click the logo you get a modal box with links not just to home, but to the top level of each of my landing pages (my resume, my Drupal consulting services, my photography services, and my blog).  The only downside to this approach was that when JavaScript is disabled there's no way back to the homepage of the site.

I know I could use the noscript tag to display the logo linking back to the home page, but how do I hide the elements with the JavaScript goodness? Simply put, I can set display:none in CSS on the elements I want to hide, and re-enable them in JavaScript.
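Here's the shape of that, with illustrative class names and paths; the modal wiring is elided:

```html
<style>
  /* Hidden unless JavaScript turns it back on. */
  .js-only { display: none; }
</style>

<!-- The JS-driven logo that opens the navigation modal. -->
<a href="#" class="js-only" data-toggle="modal" data-target="#nav-modal">
  <img src="/logo.svg" alt="Sean Reiser">
</a>

<!-- Plain link home for browsers with JavaScript disabled. -->
<noscript>
  <a href="/"><img src="/logo.svg" alt="Home"></a>
</noscript>

<script>
  // JavaScript is running, so reveal the js-only elements.
  document.querySelectorAll('.js-only').forEach(function (el) {
    el.style.display = 'inline';
  });
</script>
```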


See the Pen HTML visible only when Javascript is Enabled. by seanreiser (@seanreiser) on CodePen.