Tech-n-law-ogy

A Bot-Champion for Financially-Inclined Women!

Kudos to the Financial Times, that bastion of male writers covering male-dominated endeavors and industries! It recognized that there are women who might be interested in its articles! It also noted that not enough women experts were being quoted in its articles! Finally, it recognized research suggesting that women might be put off by articles that quote heavily or exclusively from men! So, FT outsourced the effort to correct this situation to a bot – call it a “FemBot” if you will (my name, not theirs). This bot scans articles during the editing process to determine whether the sources named in the article are male or female. Editors are then alerted when they are falling short on including women in their pieces. Later versions might actually alert writers to their overly male sourcing as they type their articles.

FT isn’t stopping there. It is also examining the images it uses and intends to press for more pictures of women, because women are more likely to click through on pictures of women than on those containing only men. The Opinion desk at the FT is also tracking gender, ethnicity, and geographic location among its contributors, with the goal of supporting more female and minority voices in the publication.

The concept of bias baked into Artificial Intelligence systems from developers and data sets is an emerging issue and well-identified risk. However, FT appears to be embracing the bias in an effort to counteract it. Well done, FT!

 

Categories: Tech-n-law-ogy

Setting Up a Good Business Website

Operating a business can be difficult for many. You might face several challenges that will at times tempt you to quit. Quitting should be the last option, because there are several measures you can take that will give you an edge over your competitors. You need to adapt to the changing trends in technology and implement them in your business.

One good thing you should do is set up a good business website. It is a perfect way of marketing your business. A good business website will help attract more customers, many of whom will be eager to learn what kinds of products or services you offer. Having one will also help build the reputation of your business, since most people will judge it by the products or services you are offering.

The other thing you can implement is search engine optimization, also known as SEO. It involves the use of several strategies that help get your site ranked near the top in most search engines. Your business website will top the list in search engines like Google, Yahoo, and Bing when you implement this strategy, which helps increase leads to your site. There are several things you should do to set up a website that is good for your business. They include:

Finding a Web Designer

You should look for a good web designer who can build a good website for your business. One thing you should look out for when hiring one is their level of expertise. You can judge a designer by looking at some of their previous projects. Go for one whose designs match your preferences.

Company Logo

Your business or company logo is another essential element of your website. You need to come up with one that is unique and stands out from the rest. Your designer can use any of several available software tools to create the best one. It should be attractive and portray all the good qualities of your company.

Easy to Navigate

Your designer should build a website that is easy to navigate. People from all walks of life will be visiting your site to look at the products or services you have to offer, so let them have an easy time going through it. Your web designer should also work on its appearance to ensure it is attractive to everyone visiting the site.

Categories: Tech-n-law-ogy

Booking a Taxi Online

There are various means of transportation in most big cities, and they have helped simplify movement from one place to another. Some of the conventional transport means include buses, bikes, trains, and taxis. Taxis remain a popular means of transport in most cities and are preferred by many.

One reason many prefer them is the level of comfort they get to enjoy. Unlike other public transport, you get to enjoy your own space in a taxi since it is not crowded; most taxis are personal vehicles. Taxis are also fast, since there is no need to wait for one to fill up, as is the case with public transportation. Accessing them is also much easier.

Improvements in technology have led most taxi companies to adopt taxi apps to simplify booking. Before this, one would have to call or flag down taxis in person. Taxi apps are quite easy to use because you can trace nearby taxis using the booking app.

When you trace one, you can book it immediately by keying in your destination. Your driver will be there within minutes to pick you up for your trip. Most apps will indicate the charges for your trip. Online taxi booking apps have helped make our lives easier, and there are several benefits you get to enjoy when you use them. They include:

Saves You Time

You get to save a lot of time when you use online taxi booking apps. Waiting for a taxi by the roadside can use up much of your time, as can walking to a specific spot to get one. With an online taxi booking app, a taxi will come to your current location within a short time.

It is Cheap

Online taxi companies are often cheap compared to regular ones. Most of them charge per kilometer, unlike regular taxis, which may quote an arbitrary fare without considering the distance you have covered. You can opt for online taxi companies to save more money.

Easy to Use

Online taxi applications are designed to be easy for anyone to use. All you need to do is download the app, key in the required details, and have a stable internet connection to trace available taxis before booking one. Your driver will be available minutes after you book your trip.

Categories: Tech-n-law-ogy

My (somewhat) complete salary history as a software engineer

It’s 2018 and somehow women are still getting paid less than men, even in supposedly progressive industries like software.[1] Whether that be from companies offering women less than men for the same position, women being less likely to negotiate or less successful in negotiations, or any of the other myriad reasons, the results are still the same: women make less than men for the same job. That’s something that shouldn’t be happening in today’s world and it’s up to us (read: men) to step up and make things right. This is my attempt to do just that.

Why am I doing this?

Longtime followers know that I’ve been dealing with serious health issues for several years. Two and a half years ago I had to stop working to focus on my health and it will likely be a couple more years before I’m able to even consider working a full-time job again. The people responsible for my last compensation package have long since left that company. That puts me in a unique position where I am not beholden to any past employers, and any future employers are far enough into the future that the information I’m sharing here will be mostly useless by then. Plus, as a white man, I know I’m going to be able to negotiate for my salary without backlash[2] when I do start working again. As such, the information in this post is more valuable to others than it is to me.

As an aside, I’ve been annoyed that throughout my career I’ve been lectured many times to avoid discussing my compensation with colleagues. Usually it’s with the warning that, “not everyone is getting what you’re getting, and we don’t want to hurt feelings.” It took me a long time to realize that the “hurt feelings” they’re talking about come from an overall lack of transparency into the compensation process, and that simply explaining why people are compensated in certain ways would be a better solution than hiding all of the information from everyone. Yes, there will always be people who think they deserve to be making more but who don’t actually deserve it. Transparency seems like a great way to communicate to them that they aren’t doing a good enough job and to help them figure out ways to improve.

The bottom line is that nothing gets better unless people are willing to share information. And while I could just share my last salary, I don’t think that’s very useful, especially when compared with the variety of already-available sources of information online. No, to be useful, I felt like I would need to reveal my entire salary history so people can determine for themselves if they’re seeing enough improvement in their salaries over time.

Where did this data come from?

The data in this post comes from the following sources:

  1. My memory. Yes, memory is fallible, but there are some data points that are so important on an emotional level that they tend to stick in my brain. I’ll point those out.
  2. Offer letters. As my offer letters post-2006 were always emailed to me, I’ve been able to be 100% sure of those details. Prior to 2006, my offer letters were always mailed to me, and I have no record of those.

Where my memory fails me and I don’t have an offer letter, I’ve made an attempt to guess the salary range I had at the time.

The data

The table below contains all of my salary history (and some other compensation). I’m including what I believe to be the data relevant to evaluating the compensation: the year I received the salary, the years of experience I had at the time (YOE), the starting and ending salary (to take raises into account), and any signing bonus (Signing $) and stock options (Options) I might have received. Any amount with a question mark indicates that I’m guessing. I did not include any restricted stock units I might have received because I only ever received them at Yahoo as part of my initial offer.

Year | YOE | Company          | State | Title                        | Starting $ | Ending $  | Signing $ | Options
2000 | 0   | Radnet, Inc.     | MA    | Webmaster                    | $48,000    | $55,000   | -         | ?
2001 | 0   | Radnet, Inc.     | MA    | UI Developer                 | $62,500    | $62,500   | -         | -
2001 | 0   | MatrixOne, Inc.  | MA    | UI Designer/Developer        | $68,000?   | ?         | $2,000    | ?
2003 | 3   | MatrixOne, Inc.  | MA    | Senior Software Engineer     | ?          | $75,000?  | -         | -
2005 | 5   | Vistaprint, Inc. | MA    | Lead Software Engineer       | $82,000?   | $98,000   | -         | 3,000
2006 | 6   | Yahoo, Inc.      | CA    | Senior Front-end Engineer    | $115,000   | ?         | $10,000   | 3,500
2008 | 8   | Yahoo, Inc.      | CA    | Principal Front-end Engineer | ?          | ?         | -         | -
2011 | 11  | Yahoo, Inc.      | CA    | Presentation Architect       | ?          | $165,000? | -         | -
2013 | 13  | Box, Inc.        | CA    | Staff Software Engineer      | $175,000   | ?         | $25,000   | 50,000
2014 | 14  | Box, Inc.        | CA    | Principal Architect          | $208,000   | $220,000  | -         | -

Job Details

The data alone doesn’t really tell the full story, so here are the details around each position. I’ve also included how I came to work at each company, as I think it’s important to distinguish blind resume submissions from having contacts at a company.

Radnet (2000-2001)

My first job out of college was at a small startup in Wakefield, MA called Radnet, Inc. I got this job because the woman who used to babysit me as a child was running human resources at the company. My official title was webmaster, and I thought I would be coming in to run the company website. As it turned out, between the time they offered me the job and my starting day, they hired someone to oversee both UI development and the website. So I never managed the website and instead spent my time making JavaScript components for the company’s web application.

I know that my starting salary was $48,000 (about $70,284 in 2018 dollars) because I was very excited about it. After spending summers working jobs that ranged from $2/hour to $6/hour, this seemed like an incredible amount of money to me. A few months in, they gave me a raise to $55,000 because I was doing a good job. Towards the end of 2000, they believed the company would be bought, so they changed my title to UI Developer and upped my salary to $62,500, believing that an acquirer would immediately fire the “webmaster” and wanting to ensure I’d benefit from the acquisition.

As it turned out, the company never got bought, and it shut down in January 2001. I never really saw much of the $62,500; eight months after I had started my first job, I was unemployed.

Note: I did receive stock options for this position, but I don’t remember what they were. I didn’t really understand what stock options were at the time so that information never stuck in my brain.

MatrixOne (2001-2005)

When Radnet closed down, my manager ended up at MatrixOne and asked if I would like to join him. I had enjoyed working with him at Radnet so I accepted. It’s important to understand that this was during the dot-com crash and there weren’t a lot of tech jobs to be had in Massachusetts at the time. I considered myself lucky to have formed a good relationship that allowed me to find a new job fairly quickly after Radnet shut down.

I don’t remember all of the details but I’m pretty sure my starting salary was close to $68,000 ($96,814 in 2018 dollars). I’m also reasonably certain that I got a small signing bonus, maybe around $2,000, that my manager negotiated for me. I also received some stock options, but once again, I didn’t really understand what they were and so didn’t even register them as part of my compensation. It didn’t matter, though, because the company stock was never again as high as the day I joined. I was never able to exercise options, even when I got some repriced options later in my career there because the stock only ever went down. (Eventually the company would be bought out by a competitor.)

My salary didn’t improve much there because the company was in perpetually poor financial health. There was a salary freeze in place almost the entire time I was there. I survived four rounds of layoffs. I was eventually “promoted” to the position of Senior Software Engineer, but it was a promotion in title only. There was no increase in salary (because of the salary freeze) and no change in my responsibilities (because the organization was weird). It was just a pat on the back to say, “good job, please don’t leave.” Spoiler alert: I left as soon as I could.

Right before I left, I did get a salary increase to around $75,000. It wasn’t enough to make me want to stay.

Vistaprint (2005-2006)

I often refer to my position at Vistaprint as my first real software engineering job. It was the first time I applied for a software engineering job without having a connection at the company; I just sent my resume in to their email address. I reported into the engineering organization (as opposed to the design organization in my prior jobs), and I got what I considered to be a good offer. The company was pre-IPO, and I was excited to get 3,000 stock options. (By this time, I actually understood what stock options were.)

I don’t recall the starting salary but I suspect it was around $82,000 ($105,867 in 2018 dollars). I definitely recall the ending salary as $98,000 for a few reasons. First, I was complaining a lot about the boring project they had assigned me to so I expected that would eliminate me from any serious raise considerations. I was shocked to get a raise and even more shocked at the amount. Second, I was bummed they didn’t give me the extra $2,000 to make an even $100,000. Last, I was secretly interviewing with both Google and Yahoo, and upping my salary meant that I could use that number when it came time to talk compensation with them.

I was only at Vistaprint for a little over a year before deciding to move to California to work for Yahoo. Vistaprint did go public while I was there, but since I left after a year, I didn’t see much from those stock options.

Yahoo (2006-2011)

Yahoo’s initial offer was the best I had received up to that point. In addition to a $115,000 base salary ($143,833 in 2018 dollars), it included a $10,000 signing bonus, 3,500 stock options, 1,500 RSUs, and relocation expenses. This was the first time I tried to negotiate for a higher starting salary, and I was summarily rejected. At least I tried.

I ended up at Yahoo through a circuitous route. I had heard that Yahoo was using my first book, Professional JavaScript for Web Developers, to teach JavaScript at the company. As such, I had an open invitation to stop by the campus if I was ever in the area. I had traveled to Mountain View to interview at Google (they had found me through my second book, Professional Ajax), so I reached out to the folks at Yahoo to meet up. I didn’t realize that conversation would turn into an invitation to apply to work at Yahoo as well.

I don’t remember a lot of my pay details after I joined. Being at Yahoo for almost five years, I got several raises and two promotions, so my pay did keep increasing. All of that information was sent to my Yahoo corporate email address, and as such, I no longer have any of the documentation. That was intermixed with periods of layoffs and salary freezes. My initial stock options ended up worthless because the company stock price never again reached the level it was at when the options were priced. I would later get repriced stock options and more RSUs, but I don’t have specifics on that.

By the time I left, I suspect I was making around $165,000 based on how I felt about the offer from Box.

It’s worth noting that I left Yahoo to try to start a company with some friends and so didn’t have a regular salary for about 18 months.

Box (2013-2016)

My offer from Box was also strong. The starting salary of $175,000 ($189,415 in 2018 dollars) was more than enough to make me happy at the time, and the offer included 50,000 stock options. Box was pre-IPO so that high stock option allocation (which I was told was higher than what they usually gave people at my level) was a big consideration for me. I negotiated for a $25,000 signing bonus, as well.

As part of my consulting business, I would regularly give talks at different companies. I agreed to give a talk for free at Box because a friend worked there and mentioned that they were having trouble managing their growing JavaScript code base. I spoke with a few people after the talk, including the VP of engineering, and we decided to explore if working at Box was a good fit for the company and me. Through several more discussions, it seemed like a good opportunity to get back into the stability of a regular salary with some interesting problems to tackle.

My memory is a bit hazy around what happened between joining and the end of my time at Box as this was the period when my health was on a steep decline. I think I got one raise as a Staff Software Engineer about three months after I joined, and was informed of being promoted to Principal Architect six months after I joined (although I wouldn’t get the pay increase for another six months). I’m reasonably certain the promotion pay increase bumped me to $208,000. I recall clearly that I got one last raise to push me to $220,000 during 2014 because I had started working from home full time due to my health and I thought it was very nice of them to give me a raise regardless.

I left Box when I was no longer physically able to work from home.

Conclusion

In my sixteen-year career, I averaged a pay increase of $10,000 per year, even taking into account several years of salary freezes at MatrixOne and Yahoo. As such, I suspect I’d be making around $250,000 if I were working full time today.

It’s also important to understand that I never asked for a raise and only occasionally negotiated other details (as mentioned in the post). I never really felt comfortable with negotiations prior to working for myself, and generally was happy with the offers I received.

With the exception of my one year at Vistaprint (during which I was a grouchy pain in the ass), I was consistently reviewed as a top performer at my position. I wasn’t put on any sort of improvement plan, and most of my feedback had to do with improving interactions and communication with colleagues. And again, with the exception of Vistaprint (because…pain in the ass), I took the feedback to heart and worked to improve in those areas.

Being single and not having a family to support throughout my entire career meant that I had more options. I could afford to take a salary that was lower than what I wanted or could get elsewhere, and I could also afford to walk away from valuable stock options (such as with Vistaprint) to find a job that was more fulfilling. I recognize that not everyone has that option, so I think it’s important to make my situation clear here.

I have two hopes from sharing this information. First, I hope that having this information will make it easier for women to understand how much they should be paid for similar work and just how their pay should be increasing throughout their career. Second, I hope that other men who are in a similarly independent position will also share their compensation history to benefit others.

We can all be better if we’re willing to share.

References
  1. By the Numbers: What pay inequality looks like for women in tech (forbes.com)
  2. Women Know When Negotiating Isn’t Worth It (theatlantic.com)
Categories: Tech-n-law-ogy

New Bill Proposed to Increase Access to Federal Court Records

Theoretically, certain documents are supposed to be freely accessible to the public, including documents contained in the dockets of the federal courts. Congress has permitted the imposition of fees for electronic access to these otherwise freely available documents – a per-page fee that, while not particularly excessive, can certainly add up. That access is accomplished through PACER – Public Access to Court Electronic Records.

The fees, their use, and any “profit” realized via the system have been the subject of public debate and litigation. Suits include class actions premised on overcharges, improper application of collected fees, and failure to abide by certain laws, such as the E-Government Act of 2002. While private companies such as Thomson Reuters and LexisNexis offer paid access with extra bells and whistles, the debate fundamentally centers on what constitutes public “access” to public documents in this day and age.

In early September, Rep. Doug Collins (R-Ga.) introduced a bill to increase transparency and access to these federal court documents. H.R. 6714, the Electronic Court Records Reform Act, seeks to open up PACER to users for free. It requires documents to be added within five days after they are filed with the court, in a text-searchable and machine-readable format. It also mandates updates to the woefully kludgy system and interface, including improvements to the search function. The bill also seeks to consolidate the Case Management/Electronic Case Files (CM/ECF) system. While this system was intended to improve efficiency within the judicial system, it is broken into different systems in different courts, which further obstructs locating records and documents. The Act would unify these disconnected systems under the Administrative Office of the U.S. Courts. Finally, the Act would permit fees to be charged to states that wish to opt into the CM/ECF system.

Who knows if this bill will pass and the moneymaker that is PACER will forever be opened up to the masses through free access. It will be interesting to see how the bill fares and, if it does pass, what it ultimately looks like. You can take a look at the current version of the bill text here.

 

Categories: Tech-n-law-ogy

Gauging Alertness – There’s an App for That

Some of my best work is done when I am alert. No, really – early mornings are my most productive time, when I am fresh from a full night’s sleep. No doubt, the large cup of coffee helps. But when I am alert, I can plow through tasks, no matter how mundane, and feel fully engaged with the project.

But then, afternoon hits. About 2:00 pm, I start to feel my attention wander and my productivity decline. Sure, I could grab some sugar or caffeine, but those measures bring along with them some undesirable side effects.

What if I could be “alerted” to this gradual decline in focus, so that I could identify my patterns (other than recognizing that 2:00 is a tough time for me) and proactively schedule my work in ways that maximize my focus and efficiencies? Traditional alertness devices and measuring methods are fairly cumbersome, obtrusive and time-consuming – what if I could employ tracking in a way that is unobtrusive and passive and part of my regular regime?

Vincent W.-S. Tseng, Saeed Abdullah, Jean Costa, and Tanzeem Choudhury, researchers at Cornell University, think they may have found a “healthier” answer to this potential problem. A smartphone app, appropriately named “AlertnessScanner” (Android only), can measure your alertness by taking photographic bursts of your pupils with the front-facing camera when you look at your smartphone. In this manner, alertness can be measured continuously and in a way that does not meaningfully interfere with your regularly scheduled activities.

[Image: AlertnessScanner, from http://pac.cs.cornell.edu/pubs/AlertnessScanner.pdf]

Why look at the eyes? The Cornell researchers note that the pupils of alert people are more dilated to increase information intake, courtesy of the sympathetic nervous system. Conversely, when people are drowsy, the parasympathetic nervous system causes pupils to contract. The app, developed for research purposes only at this point, measures pupil dilation through its imaging and analysis tools. Images are taken when the phone is unlocked or at the user’s prompting. The analysis employs a “pupil-to-iris” ratio to account for the different focal distance each time a person looks at their phone. The app also allows the user to view the images and confirm they are of sufficient quality before saving, and it offers a “sleep journal” to passively record the duration of the previous night’s sleep.

When you can track your alertness on the go, you can implement breaks or schedule work according to difficulty or focus-needs. I can see this being really important for some jobs – surgeons and heavy machine operators come to mind. But even lawyers can benefit from knowing when to schedule the drafting of a Supreme Court brief and when to schedule reviewing the news alerts.

While early experimentation looks promising, the app remains a research prototype and is not yet in commercial development. The researchers did identify some potential shortcomings, such as issues with controlling ambient light and the resolution the front-facing camera needs to record usable results. Still, the idea of better body analytics is a popular one right now – perhaps the next iteration of the Apple Watch can incorporate pupil dilation along with other metrics to zap you when you need it most.

 

Categories: Tech-n-law-ogy

Swapping Bots for Lawyers Via App: DoNotPay

I know it has been a fearfully long while since I last posted here. I forgive anyone and everyone for abandoning the empty, echoing halls of AdvocatesStudio for greener, more fertile, and more frequently updated tech blog pastures. Yet here I am, writing again, prompted by the thought that suing someone may be as simple as downloading an app. DoNotPay, a robot-lawyer chatbot app, is now promising to help people file suits in small claims court, no JD required. And, because I like the cheap and free around here, DoNotPay currently is free for users. That is a lot cheaper than the hourly rate charged by the average lawyer.

Developer Josh Browder, now barely of drinking age here in the US, created DoNotPay as a means of automating the process of challenging parking tickets, inspired mainly by his own excessive collection of tickets, generated shortly after he received his driver’s license. The chatbot – a conversational interface that prompts a user to provide information that can then be leveraged by the AI to provide answers or actions – allowed users to select one of several defenses to the ticket, enter details, and send an appeal generated by the app to the appropriate legal authority. Browder taught himself to code at the age of 12, and his efforts certainly haven’t been wasted – the first version of the bot, released in 2015, reportedly saved UK drivers approximately 2 million pounds in two months’ time. Buoyed by his early success, Browder has reportedly claimed his app may “take down” the legal profession, which undoubtedly will be applauded by a couple of people.

Following on the parking ticket win, Browder added new beta functionality to the app in 2017 on the heels of the massive Equifax data breach – he apparently was swept up in the breach as well (notice a trend here?). DoNotPay offered the ability to sue Equifax in small claims courts throughout the U.S., up to the applicable jurisdictional limit, ranging from $2,500 to $25,000. The new functionality basically assisted the user in preparing the forms necessary for the small claims action; you still had to serve the complaint and attend the hearing. After you entered your name and address, the app generated the necessary papers to institute a small claims action in a PDF format that could be printed and filed. Providing any assistance in the process, though, is a benefit to users unfamiliar with local small claims practice who might otherwise not bother to navigate the legal maze. And, as with the parking tickets, users reported some success using the app to secure awards from Equifax.

Within the past week, Browder has again tweaked the app, now permitting users to create documents to sue anyone in small claims court. The bot is now available via mobile application – previously, the tool was strictly web-based. An Android app is coming, Browder promises. There are additional new features, and this might be where Browder monetizes: users can find deals on fast food by filling out surveys, get deals on prescription and over-the-counter drugs, make appointments at the California Department of Motor Vehicles, and check on class action settlement eligibility.

The app can be used to help fight bank fees and dispute transactions, secure refunds from companies like Uber, and fix credit reports. Like the beta version, the bot asks for a name and address, claim size (to see if it is within the jurisdictional limit of the applicable state), and then generates a demand letter, creates the filing documents, offers information on how to serve the suit, and even generates suggested scripts and questions that users can leverage at the hearing.

The new app doesn’t stop there – DoNotPay also recently acquired Visabot to assist with green card applications and other visa filings. While Visabot charged for some of its services, Browder is offering the former Visabot services, like all DoNotPay services, for free.

Does DoNotPay violate state laws on the unauthorized practice of law? Good question, and one that is not yet resolved. My thought is that, if the information DoNotPay provides is targeted information that is freely accessible in the public forum, albeit in a guided interface that helps users cut through the swathes of irrelevant, confusing, or downright unhelpful information, perhaps that is not the same as providing legal advice. However, as I haven’t used the app myself yet, I cannot comment on whether any of the tools cross the line. I also cannot comment on the accuracy of the information provided by the app. Browder certainly maintains that he has been addressing concerns and making updates to improve information and to ensure compliance with applicable laws.

Browder also maintains that the information users provide to the app is protected – per DoNotPay’s privacy policy, user data is secured with 256-bit encryption, and the company purports not to access PII or case information.

Some may cynically claim that apps like this make an already litigious system worse. However, the fact remains that those who are most likely to use such an app are most likely the under-served segments of legal services in our society. Perhaps opening those doors a little wider may encourage some positive behaviors on the part of institutions that have benefited from that lack of access. Particularly in the area of immigration these days, such assistance, in any form, may be vital and life altering.

It is not clear how long the app will remain free. For now, Browder is seed-funded with $1.1 million from investors and micro-donations from customers. Browder’s stated intention is that basic legal services will remain free, but inevitably, he may need to add charges for some services in order to keep the app going.

You can download the app yourself on the App Store – feel free to report back on your experience. Would love to know how our new Robot overlords handle the complexities of small claims court.

Categories: Tech-n-law-ogy

Extracting command line arguments from Node.js using destructuring

If you’ve worked on a Node.js command-line program, you were probably faced with the extraction of command line arguments. Node.js provides all command line arguments in the process.argv array. However, the contents of the array aren’t what you might expect.

What’s in process.argv?

The first two items in process.argv are:

  1. The path to the executable running the JavaScript file
  2. The path of the JavaScript file being executed

So the first command line argument is the third item in the array. For example, consider the following command that runs a Node.js program:

node index.js --watch

The contents of process.argv will look something like this (depending on your system and file root):

  1. /usr/bin/node
  2. /home/nzakas/projects/example/index.js
  3. --watch

While the first two items in the array might be useful to some, chances are that you’re only interested in --watch. Fortunately, you can use JavaScript destructuring to pick out just the command line arguments you want.

Using destructuring to extract arguments

Using JavaScript destructuring, you can separate the process.argv array into pieces and only use what you need. For example, this code separates the array into its three parts:

const [ bin, sourcePath, ...args ] = process.argv;

console.log(args[0]);   // "--watch"

Here, the bin variable receives the Node.js executable path, sourcePath receives the JavaScript filepath, and the rest element args is an array containing all of the remaining command line arguments.

You can take this one step further and just omit bin and sourcePath if you have no use for them:

const [ , , ...args ] = process.argv;

console.log(args[0]);   // "--watch"

The two commas at the beginning of the pattern indicate that you’d like to skip over the first two items in the array and store the remaining items in the args array. You can then further process args to determine what to do next.
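As a quick sketch of what that further processing might look like, here's one way to pick out a flag and treat the rest as file paths. (The --watch flag and the non-flag-means-file convention are just illustrative assumptions, not part of any standard.)

const [ , , ...args ] = process.argv;

// detect a hypothetical --watch flag
const watch = args.includes("--watch");

// treat any argument that doesn't start with "--" as a file path
const files = args.filter(arg => !arg.startsWith("--"));

console.log(watch);   // true when --watch was passed
console.log(files);   // e.g. ["src/index.js"] for `node index.js --watch src/index.js`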

Conclusion

While the process.argv array is a bit confusing at first, you can easily slice off just the information you’re interested in using JavaScript destructuring. Destructuring assignment is ideally suited for extracting just the information you want from an array.

Categories: Tech-n-law-ogy

Detecting new posts with Jekyll and Netlify

This blog has long featured the ability to subscribe by email, so you could get an email notification when a new post was published. I’ve used various services over the years to achieve this, first FeedBurner and later Zapier. As I’m a do-it-yourself kind of person, I never liked relying on external services to determine when a new post appeared on my blog, but I figured I would never be able to build my own system when I moved this blog from the dynamic WordPress to the static Jekyll[1]. Still, it seemed like a waste to have a service keep polling an RSS feed to see if it changed. After all, I know when my blog is being built…why can’t I just check for a new post then? It took me a little while and several iterations, but eventually I figured out a way.

Step 1: Creating a data source

Most services that check for new blog posts use RSS feeds to do so. I didn’t want to use the RSS feed for two reasons:

  1. Parsing RSS is a pain
  2. Bandwidth concerns - My RSS feed is quite large because I include full post content

So I decided to create a small JSON file containing just the information I was interested in. This file lives at /feeds/firstpost.json and contains metadata related to just the most recent post on the blog. Here’s the Liquid template:

---
layout: null
---
{
    {% assign post = site.posts.first %}
    "id": "{{ post.url | absolute_url | sha1 }}",
    "title": {{ post.title | jsonify }},
    "date_published": "{{ post.date | date_to_xmlschema }}",
    "summary": {{ post.content | strip_html | truncatewords: 55 | jsonify }},
    "url": "{{ post.url | absolute_url }}"
}

This file includes just the information I need for any new blog post notification, which might include emails, tweets, Slack messages, etc. I’m using the absolute URL for the blog post as a unique identifier, but you can use anything that is sufficiently unique. (You can always add or remove any data you may need if this dataset doesn’t fit your purposes.)
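For illustration, here's roughly what a generated /feeds/firstpost.json might look like after a build. All of the values below are made up; the id is just the sha1 hash of the post's absolute URL.

{
    "id": "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12",
    "title": "Example post title",
    "date_published": "2018-10-05T00:00:00-07:00",
    "summary": "The first 55 words of the post with the HTML stripped out...",
    "url": "https://humanwhocodes.com/blog/2018/10/example-post/"
}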

Credit: This format is loosely based on JSON Feed[2] and the code is partially taken from Alexandre Vallières-Lagacé’s Jekyll JSON Feed implementation[3].

Step 2: Deploy the data source

This is very important: the data source must already be live in order for the detection script to work correctly. So before going on to the next step, deploy an update to your site.

Step 3: Create the new post detection script

The new post detection script checks the live data source against the one on disk after running jekyll build. If the id of the most recent post is different between the live and local versions of firstpost.json, then there is a new post. Here’s the detection script:

"use strict"; const fs = require("fs"); const fetch = require("node-fetch"); (async () => { // fetch the live data source const response = await fetch("https://humanwhocodes.com/feeds/firstpost.json"); if (response.status !== 200) { throw new Error("Invalid response status: " + response.status); } const currentFirstPost = await response.json(); console.log("Current first post is ", currentFirstPost.id); // read the locally built version of the data source const newFirstPost = JSON.parse(fs.readFileSync("./_site/feeds/firstpost.json", { encoding: "utf8" })); console.log("New first post is ", newFirstPost.id); // compare the two if (currentFirstPost.id !== newFirstPost.id) { console.log("New post detected!"); // do something for new posts } })();

This script uses node-fetch to retrieve the live data source and then compares it to the local data source. If the id is different, it outputs a message. How you respond to a new post is up to you. Some options include:

  • Send an email notification
  • Post a tweet
  • Post a Slack message
  • Emit an event to AWS CloudWatch (this is what I do)

The most important part of the script is that it needs to be executed after jekyll build and before the site is deployed.
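For example, here's a minimal sketch of what the "do something for new posts" branch might do for the Slack option, using a Slack incoming webhook. The SLACK_WEBHOOK_URL environment variable and the notifyNewPost name are my own placeholders, not part of the original script.

"use strict";

const fetch = require("node-fetch");

// hypothetical: a Slack incoming webhook URL stored in an environment variable
const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL;

async function notifyNewPost(post) {

    // Slack incoming webhooks accept a simple JSON payload with a "text" field
    const response = await fetch(SLACK_WEBHOOK_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            text: `New post published: ${post.title} (${post.url})`
        })
    });

    if (!response.ok) {
        throw new Error("Slack notification failed: " + response.status);
    }
}

Calling notifyNewPost(newFirstPost) inside the if block would then announce the new post whenever the ids differ.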

Step 4: Updating Netlify configuration

One of the advantages that Netlify[4] has over GitHub pages for Jekyll sites is the ability to modify the build command. The easiest way to do that is by using a netlify.toml file[5] in the root of your site. In that file, you can modify the build command. Here’s an example:

[build]
  command = "jekyll build && node _tools/newpostcheck.js"
  publish = "_site"

The command entry specifies the build command while publish indicates the directory into which the built web site files should be placed (most Jekyll builds use _site, and this is Netlify’s default). The command should be updated to run the new post detection script after jekyll build.

Note: You must have a package.json file in the root of your repository to have Netlify install Node.js and any dependencies (such as node-fetch) automatically.
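A minimal package.json along these lines (the name and version are illustrative) is enough for Netlify to install node-fetch before running the build command:

{
    "name": "my-jekyll-site",
    "private": true,
    "dependencies": {
        "node-fetch": "^2.6.0"
    }
}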

Step 5: Deploy to Netlify

The last step is to deploy the changes discussed in this post. When Netlify builds your site, the new post detection script will be executed and you will be able to respond accordingly. It’s a good idea to run the script once with a new post and observe the logs just to make sure it’s working correctly before hooking up notifications.

Conclusion

The advantages of using a static site generator (such as Jekyll) sometimes mean giving up a bit of convenience as it relates to changes on your site. While dynamic solutions (such as WordPress) might offer more hooks, static solutions are often capable of similar functionality. New blog post notifications are important for most blogs, and being able to achieve them using Jekyll is one more vote in favor of static sites.

While this post focuses on Jekyll and Netlify, the same approach should work for any static site generator and any deployment system that allows you to modify the build command.

References
  1. Jekyll (jekyllrb.com)
  2. JSON Feed (jsonfeed.org)
  3. jekyll-json-feed (github.com)
  4. Netlify (netlify.com)
  5. The netlify.toml File (netlify.com)
Categories: Tech-n-law-ogy