(almost) Native BASH semver CHECK

Recently I needed a simple way to compare semver strings in a BASH script and was unable to find a solution that I liked, so I wrote one. Why almost-native BASH? Leveraging the commonly available sort utility avoids re-implementing the version sort logic, reduces potential bugs, and keeps the function to a simple three lines versus the 30+ lines of most BASH-native solutions.

This is working well in the environments I manage. If you notice any bugs or issues, please comment.

The function returns a standard -1, 0, 1 response. Basic unit tests are included to validate functionality and provide example usage.

Enjoy!

#!/usr/bin/env bash

function semver_check {
  # Sort the two versions low->high, then take the last one (highest)
  local HV; HV=$(echo -e "$1\n$2" | sort -V | tail -1)
  # Not equal and $1 is not the high version: $1 < $2, so return -1
  [[ "$1" != "$2" && "$1" != "$HV" ]] && echo -1 && return
  # Otherwise echo 0 if they are equal, 1 if $1 is the high version
  [[ "$1" == "$2" ]]; echo $?
}

#
# TESTS
#

LV="1.2.3"
HV="2.3.4"

ARGS=("Test A: Ver1 == Ver2" "$LV" "$LV")
echo -n "${ARGS[*]} "
[[ $(semver_check "${ARGS[1]}" "${ARGS[2]}") -eq 0 ]] && echo "PASS" || echo "FAIL"

ARGS=("Test B: Ver1 < Ver2" "$LV" "$HV")
echo -n "${ARGS[*]} "
[[ $(semver_check "${ARGS[1]}" "${ARGS[2]}") -lt 0 ]] && echo "PASS" || echo "FAIL"

ARGS=("Test C: Ver1 > Ver2" "$HV" "$LV")
echo -n "${ARGS[*]} "
[[ $(semver_check "${ARGS[1]}" "${ARGS[2]}") -gt 0 ]] && echo "PASS" || echo "FAIL"

ARGS=("Test D: Ver1 >= Ver2" "$HV" "$LV")
echo -n "${ARGS[*]} "
RESULT="PASS"
[[ $(semver_check "${ARGS[1]}" "${ARGS[2]}") -ge 0 ]] || RESULT="FAIL"
[[ $(semver_check "${ARGS[1]}" "${ARGS[1]}") -ge 0 ]] || RESULT="FAIL"
echo ${RESULT}

ARGS=("Test E: Ver1 <= Ver2" "$LV" "$HV")
echo -n "${ARGS[*]} "
RESULT="PASS"
[[ $(semver_check "${ARGS[1]}" "${ARGS[2]}") -le 0 ]] || RESULT="FAIL"
[[ $(semver_check "${ARGS[1]}" "${ARGS[1]}") -le 0 ]] || RESULT="FAIL"
echo ${RESULT}

Addendum: The sort binary must support the natural version sort option (-V, --version-sort). This option is available in GNU coreutils 7+ and on macOS 10.14+.
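
If you need to guard against a sort that lacks this option, a minimal check along these lines can run before the function is first called (this guard is my own suggestion, not part of the original function):

# Probe sort for -V support: with empty input it exits 0 when the
# option is recognized and non-zero otherwise.
if ! sort -V </dev/null >/dev/null 2>&1; then
  echo "ERROR: sort does not support -V (--version-sort)" >&2
  exit 1
fi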

The best way to do module.exports in node

Recently I was asked a question regarding the right way to export an object in Node. Consider the following code:

class Foo {
    constructor() {
        this.foo = 'foo'
    }
}

module.exports = function() {
    return new Foo()
}

We have a module that exports a function which returns a shiny new Foo object.

Great! Then someone asks: why bother wrapping it in a function when you can just export the object directly? Let’s see.

class Foo {
    constructor() {
        this.foo = 'foo'
    }
}
module.exports = new Foo()

Basically the same thing, right? Not so fast! Yes, the function is gone and the module now returns that shiny new Foo object directly. However, this second version will only ever create one Foo object: you can require the module anywhere in your code and it will always return the same Foo object.

Take a look at this REPL (Read-Eval-Print-Loop) session:

> foo=require('./foo')()
Foo { foo: 'foo' }
> foo.foo='bar'
'bar'
> bar=require('./foo')()
Foo { foo: 'foo' }
> foo
Foo { foo: 'bar' }

As you can see above, this is the foo module from our first example. You can tell by the trailing parens () on the require statement: they execute the function that is exported by the module.

We assign the returned Foo object to a variable named foo. You can see that the property ‘foo’ of object foo is set to a string with the value ‘foo’.

Next we set the property foo to the string ‘bar’ and then require foo again. This time we assign it to the variable bar, and we see that it is in fact a brand new Foo object, because bar.foo equals ‘foo’ but foo.foo equals ‘bar’.

(Wishing I’d used different variable names at this point; oh well, too late for that now.) Still with me? Great!

So everything looks good… well, that is, if you want a brand new Foo every time you require the module. But what about that second example? Let’s have a look.

> foo=require('./foo')
Foo { foo: 'foo' }
> bar=require('./foo')
Foo { foo: 'foo' }
> bar.foo='bar'
'bar'
> foo
Foo { foo: 'bar' }
> baz=require('./foo')
Foo { foo: 'bar' }

Here we require the same foo module twice, assigning it once to foo and once to bar. They are both the same, as you would expect. However, this time when we set bar.foo to the string value ‘bar’, we can see that we really have a single object: when we check our foo object, its foo property is now a string with the value ‘bar’.

Just to make the point extra clear, we require foo a third time and assign it to baz. Again you see that baz.foo equals ‘bar’.

This is what is known as a Singleton: the module creates one object the first time it is required, then hands that same object out every time it is required again.

This is a very important concept to understand and remember. One way isn’t better than the other; the method you choose depends on what you are trying to accomplish. For example, if you have a logging module that you want to configure once but use everywhere, you’ll want the second example, but if you’re creating a catalog of items, each with its own distinct properties, you’ll want to stick with the first.
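
To make the trade-off concrete, here is a minimal sketch of both patterns side by side (the Logger and Item names are hypothetical, chosen only for illustration):

// logger.js -- exports one shared instance (a Singleton):
// every require('./logger') returns this same object.
class Logger {
    constructor() {
        this.level = 'info'
    }
}
module.exports = new Logger()

// item.js -- exports a factory function:
// each require('./item')(name) call creates a fresh object.
class Item {
    constructor(name) {
        this.name = name
    }
}
module.exports = function(name) {
    return new Item(name)
}

Configure the shared logger once at startup and every module that requires it sees the change; call the item factory once per catalog entry and each object keeps its own state.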

Inline parsing JSON objects in 3 languages

JavaScript Object Notation, commonly known as JSON, is a convenient format for many reasons, and it is no wonder that many APIs and web services today support returning JSON (in some cases exclusively). Many times I have needed to parse values from large JSON objects in a shell script or at a command line, and use them as inputs to another program. This has put me on the lookout for easy ways to handle inline parsing of JSON objects using languages commonly available on modern Linux systems.

Below are examples in three languages (Node JS, Python and Perl) to accomplish this task.

Example JSON Object:

{"product": {"builds": {"1234": "IT WORKS!"}, "default": "1234"}}

Inline Node JS:

node -e "s=''; i=process.stdin; i.on('data', function(d) { s += d }); i.on('end', function() { j=JSON.parse(s).product; console.log(j.builds[j.default]) })"

Inline Python:

python -c 'import sys, json; j=json.load(sys.stdin)["product"]; print(j["builds"][j["default"]])'

Inline Perl:

perl -e 'use JSON; local $/; $d=decode_json(<>)->{product}; print $d->{builds}->{$d->{default}}'
Note: the JSON module is required in the Perl example above. If it is not already available, it can easily be installed via CPAN with:

sudo perl -MCPAN -e 'install JSON'


Full example for Python:

echo '{"product": {"builds": {"1234": "IT WORKS!"}, "default": "1234"}}' |python -c 'import sys, json; j=json.load(sys.stdin)["product"]; print j["builds"][j["default"]]'

vCloud Director 1.0 Launched

Excited to share that the project I have been working on for the past nine months, vCloud Director, has shipped!

http://www.vmware.com/company/news/releases/vmworld-infrastructure

SureTrak 3.0 Printing Issues

Recently a client called me in to investigate an issue they were having with SureTrak 3.0 by Primavera: the program would crash whenever they tried to print a schedule. This is legacy software, no longer updated or supported. The Primavera product line was recently bought by Oracle, but backward compatibility is not guaranteed, and this particular client frequently works with old data files, so backward compatibility is an absolute must.

The SureTrak software states that it is compatible with Windows 98 and ME but not XP. My initial review indicated a software incompatibility with XP; however, the software had been working fine, and the issue had appeared out of nowhere, causing SureTrak to crash on all systems when attempting to print. The solution was not apparent, but after a number of tests I was able to conclusively determine that the problem was due to path length limits from the MS-DOS era.

(TL;DR: Shorten your path and/or filename so the total does not exceed the maximum length of 79 characters.)

It has been nearly 20 years since I developed software for MS-DOS, so I needed to refresh my memory on the subject. DOS paths are limited to 64 characters, or 66 if you count the storage designation (drive letter and colon). I’m sure more seasoned DOS coders will remember the magic number as being 80 characters for allocating string buffers. The following helpful excerpt was taken from http://www.datman.com/tbul/dmtb_018.htm:

The longest filename string including the drive letter and colon is 79 characters (many programmers remember the magic number to be 80, which includes the terminating "nul" character at the end).

Another limit many users overlook is the 64-character limit on the pathname. In this context, the 64-character limit starts with the first backslash which represents the root directory. If you add the common volume specifier (a drive letter plus a colon), the maximum length for the pathname will be 66. Now, the longest name (the "lastname") in the so-called 8.3 DOS naming convention is 12 characters. Therefore, the total is

   66 + 1 + 12 = 79

The "+ 1" in the middle is for the last backslash. Adding the terminating nul character at the end will make up the magic number of 80 bytes which most people remember.

The better way to remember the limits is to remember that the longest subdirectory name allowed in DOS is 66 characters. The 80-character limit commonly cited should be only for programmers who need to allocate a buffer of 80 bytes. If you remember the 66-character limit, then the 79-character limit can be derived.

That says it all right there. Needless to say, identifying the issue was the hard part; the solution was easy. Keep your path lengths reasonably short and you will never have to deal with this elusive and obscure bug!
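
If you want to audit a directory of schedule files for offending paths, a one-liner like the following should work on any system with find and awk (/path/to/schedules is a placeholder, and since DOS counts the drive letter and colon, adjust the threshold for how the share is mapped):

# Print every path longer than 79 characters under the given directory
find /path/to/schedules -print | awk 'length($0) > 79'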