Tuesday, June 27, 2023

AI-assisted research using ChatGPT

Brace yourselves! My recent journey with ChatGPT 3.5, and now 4.0, has been nothing short of a roller coaster ride, and I'm excited to share the highlights of this adventure with you!

Once upon a time, ChatGPT was just another tech term that had fluttered by my ears. I mean, who needs another AI chatbot when you've got kids to keep you on your toes, right? But then, a respected colleague suggested an interesting angle - using ChatGPT for Bible study. Intrigued, I decided to plunge in and explore, and boy, was it a revelation!

From the moment I started interacting with ChatGPT, it was as if I was engaging in conversation with a fellow human being! No complicated tech lingo, no new syntax - just plain, simple English. I could ask anything - basic or intricate, generic or specific - and always be met with an answer that mirrored my query.

Imagine the joy of receiving responses tailor-made to your curiosity! You pose a specific question, you get a precise answer. And the best part? No risk of getting side-tracked or lost in translation. It's like having a super knowledgeable friend who can hold a conversation exactly at your desired pace and depth!

Now, let's get down to the reasons why ChatGPT is the most refreshing and exhilarating tech encounter I've had.

  1. Relevance: It provides responses that precisely match your query. If an answer feels incomplete, keep the questions rolling until you're satisfied!
  2. Well-rounded: It strives to deliver unbiased responses full of varied details, and it even alerts you to contentious topics.
  3. Speed: Even the free version responds faster than the average human conversation - if it can keep up with your reading speed, then you're all set!
  4. Easily Digestible: It crafts responses that are effortlessly understandable. You can even ask it to simplify to a level a 5-year-old can grasp.
  5. Perfectly Sized Responses: Its responses are just the right size for research, quenching your immediate curiosity while leaving you with tantalizing questions to keep exploring.

So here's the deal: ChatGPT has supercharged my productivity, both at work and home. A quick question here, a swift response there, and I'm off to the races! Plus, it's turned Bible study into an even more enriching experience.

Let me tell you, this brilliant tool isn't without its bloopers! There have been quite a few instances where it didn't hit the mark. But here's the beauty of it - if you're navigating a digital workflow and hit a snag, you simply let ChatGPT know it made a boo-boo. And guess what? It leaps right into action, refining its responses. It's like having a persistent problem-solving buddy who doesn't mind rolling up its sleeves a couple of times to get things right. How cool is that?

ChatGPT has truly made a splash in my world!


Try this prompt out:

Exegete John 3:16.

John 3:16 is one of the most well-known verses in the Bible, and it reads: "For God so loved the world that he gave his one and only Son, that whoever believes in him shall not perish but have eternal life."

Let's break down this profound verse:

  1. "For God so loved the world": The verse starts with the assertion of God's immense love for the world. This refers to all of humanity – every person in every nation. The use of "so" emphasizes the depth and extent of God's love.

  2. "that he gave his one and only Son,": God's love is demonstrated in the act of giving. The magnitude of the gift – His "one and only Son" – underscores the magnitude of His love. This refers to the incarnation of Jesus Christ, God's Son, who came into the world as a human being.

  3. "that whoever believes in him": The word "whoever" makes clear that this invitation is open to all. The only condition set is belief in Jesus. Belief (faith) here implies trust, acceptance, and commitment to follow Jesus.

  4. "shall not perish but have eternal life.": The consequence of this belief is twofold: first, believers will not perish – this is a spiritual death, a final separation from God. Secondly, they will have eternal life. In the Christian understanding, eternal life isn't just unending life, but a quality of life characterized by love, peace, and fellowship with God. This life begins now and continues beyond physical death into eternity.

John 3:16 is the essence of the gospel message: God, out of His vast love, gave His Son Jesus Christ to the world, so that whoever places their faith in Jesus will be saved from spiritual death and receive eternal life.

Friday, March 12, 2021

How to test a NodeJS module and swap out its private dependent class with a fake

TL;DR

  • Use rewiremock
  • Create a FakeDep class with which to replace the real one
  • rewiremock('../../../real/code/my-real-class').with(FakeDep);
  • rewiremock.enable();

Details

When you're unit-testing a module, you want to isolate your testing to only that one piece.  You presumably have other unit tests for the dependencies you have written.

All day today I have been trying to swap out a private class dependency inside of a SUT (system under test).  The SUT creates a new instance of the class returned by the module it requires.

I found examples of how to swap out dependencies this way and that--but never a complete example of how to swap out a module that returns a class that is later newed up by the SUT.  I know there are others out there looking for this solution because I saw their questions that have gone unanswered--or at least not answered completely.

I tried using proxyquire, but I just kept spinning my wheels.  The SUT kept newing up the actual dependency and not my fake.  At the end of it, I submitted an issue asking them to add an example for my use case and moved on.

Anyway, here's what I ended up with.  
Please excuse the nature of the code, as it has been sanitized.

// ./real/code/my-sut.js
const MyRealPrivateDependency = require('./my-real-class'); // Not shown
const dep = new MyRealPrivateDependency('param');
// We want dep to be replaced with FakeDep

async function doSomethingCool (filters) {
  await dep.doSomethingCool(filters, true);
}

exports.doSomethingCool = doSomethingCool;

// ./test/A/sut-tests.js
const sinon = require('sinon');
const chai = require('chai');
const sinonChai = require('sinon-chai');
chai.use(sinonChai);
const expect = chai.expect;

describe('A', () => {
  describe('B', () => {
    describe('SUT', () => {
      it('should doSomethingCool', async () => {
        // Arrange
        const rewiremock = require('rewiremock/node');
        const stub = sinon.stub().resolves();
        class FakeDep {
          async doSomethingCool() {
            return await stub(...arguments);
          }
        }

        rewiremock('../../../real/code/my-real-class').with(FakeDep);
        rewiremock.enable();
        const sut = require('../../../real/code/my-sut');
        const filters = {};
        // Act
        await sut.doSomethingCool(filters);
        // Assert
        expect(stub).to.have.been.calledOnce;
        expect(stub).to.have.been.calledWith(filters, true);
      });
    });
  });
});


Alternatively, instead of creating a new fake class, you can require the dependency, swap out the prototype method you want to change with the stub, and call it a day like this:

const FakeDep = require('../../../real/code/my-real-class');
FakeDep.prototype.doSomethingCool = stub;
rewiremock('../../../real/code/my-real-class').with(FakeDep);
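One caveat: rewiremock keeps intercepting require() calls until you turn it off, so if other tests run in the same process you'll want to clean up.  Here's a minimal teardown sketch (the afterEach placement and the require-cache cleanup are illustrative--adapt them to your test layout):

afterEach(() => {
  const rewiremock = require('rewiremock/node');
  rewiremock.disable(); // stop intercepting require() calls
  // Drop the SUT from the require cache so the next test re-requires it fresh
  delete require.cache[require.resolve('../../../real/code/my-sut')];
});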

Enjoy!

Thursday, February 18, 2021

AWS CloudFormation Multi-Region Failover using DynamoDB v2019 and Lambda

TL;DR

See the Solution section below--the full working example is on GitHub.

Rant

Scouring the internet for a solution for multi-region failover using a DynamoDB Global Table (v2019) in CloudFormation resulted in basically nothing.  Sure, I found some solutions using an antiquated version of DynamoDB Global Tables--and this sent me down a rabbit trail until I realized that it was out-of-date.

One would think that AWS would have at least an example laying around somewhere so you aren't left trying to reinvent the wheel.

I tried posting a solution on Stack Overflow, but got bounced because it references off-site resources (GitHub) and this example is way too complicated to post in a single answer on SO.  I'm starting to see why it's so hard to find good resources on common AWS questions.

Solution

After lots of trial and error, I've come up with a solution and posted it on GitHub here.

I'm new to applying licenses to code, so let me know if I did it wrong or should've chosen a different license.

The repository is a collection of common templates that will likely fill out over the years.

Here were our requirements:

  • CloudFormation
  • NodeJS
  • Multi-Region failover
  • Serverless (Lambda)
  • Low-intensity check to see if we're running in the active region (we went with an environment variable that the Lambda could check--see the sketch after this list)
  • No manual creation of resources in the AWS Console (production requirement)
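Here's roughly what that low-intensity check looks like from the Lambda's point of view (a minimal sketch; the handler and its work are illustrative):

// REGION_STATUS is the env var our failover stack flips between
// 'active' and 'inactive'
exports.handler = async (event) => {
  if (process.env.REGION_STATUS !== 'active') {
    return; // standby region: cheap no-op, no network calls needed
  }
  // ... do the real work here ...
};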

The basic setup is that we have a global config table (DynamoDB) that streams edits to all regions we're operating in.  We look to see what the new active region is, and then update all of our resources to accommodate the new state.  In the case of this example, it only updates the Lambda environment variable REGION_STATUS to either active or inactive.

In our final implementation, we're enabling and disabling alarms, schedules, events, etc.  But all of those examples come with a lot of extra complexity in the template, so I decided to leave them out to have a solution that's as simple as possible, yet introduces enough complexity to be useful.
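To make the stream side concrete, here's a sketch of a handler that reacts to the config-table stream and flips REGION_STATUS on a worker function (the item shape, function name, and surrounding details are illustrative assumptions--see the repo for the real code):

// Hypothetical item shape: the config item keyed 'active-region' has a
// 'value' attribute holding the region name, e.g. 'us-east-1'.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

exports.handler = async (event) => {
  for (const record of event.Records) {
    const image = record.dynamodb && record.dynamodb.NewImage;
    if (!image || !image.value) continue; // e.g. REMOVE events
    const activeRegion = image.value.S;
    const status = activeRegion === process.env.AWS_REGION ? 'active' : 'inactive';
    // Flip the env var the worker Lambda checks.  Note: real code should merge
    // with the function's existing env vars, since this call replaces them all.
    await lambda.updateFunctionConfiguration({
      FunctionName: 'my-worker-function', // hypothetical
      Environment: { Variables: { REGION_STATUS: status } }
    }).promise();
  }
};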

After the stacks are deployed to the various regions, just add/edit the active-region key with the value of the region you want and Bob's your uncle.  After about seven seconds, you should be updated and failed over.


Monday, March 9, 2020

How to share code between AWS Lambda functions using Serverless Application Model in Node.js

I'm writing this post because it turned out to be a lot more difficult than I think it should've been.
I mean, who doesn't have code that they want to share between lambda functions?
And who doesn't want to debug those functions--and the business logic inside them--quickly, without firing up the whole lambda in the AWS environment?

We wanted to have a utility package that wraps all of the details when accessing our architecture.  We wanted the lambda functions to be as readable and minimal as possible.

I kid you not--it took three of us three to four days to figure out a reasonable solution!

Here were our requirements:
  • Be able to debug locally
  • Be able to run unit tests on business logic (without having to run in an AWS sandbox)
  • Be able to run tests in sam local start-api
  • Be able to debug the code in the container via sam local invoke
  • sam build works
  • sam deploy works
  • Runs in AWS Lambda in the cloud

The first idea I had was to add a file reference to the package.json file inside the lambda folders and let npm pull those dependencies in.  It worked perfectly--the first time.  The second time I did a sam build, it DELETED all of the shared code!  WHY?  Apparently, when an .aws-sam folder exists, sam build performs an npm cleanup of some sort, and that appears to delete the shared code.  That was sad, because it was a super clean implementation and I was reluctant to walk away from it.
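For reference, this is the kind of file: reference I mean--don't actually do this with sam build, for the reason above (paths and package names are illustrative):

// lambda/my-function/package.json (illustrative)
{
  "name": "my-function",
  "dependencies": {
    "shared-utils": "file:../../shared-utils"
  }
}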

Anyway, one of my coworkers went down another path and required dependencies using a ternary operator.  If we're running in AWS, then require the full path where the layer is installed.  Otherwise, require the relative path to the shared code.
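It looks something like this (a sketch--the environment check and paths are illustrative; the repo linked below has the real details).  In AWS, a Node.js layer's contents are mounted under /opt/nodejs, while locally the shared code lives at a relative path:

// Illustrative: resolve the shared layer differently in the cloud vs. locally
const utils = process.env.AWS_EXECUTION_ENV
  ? require('/opt/nodejs/utils')    // running in AWS Lambda (layer mount point)
  : require('../../layers/utils');  // running locally / in unit tests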

We created a GitHub repo to help others get an easy start, here: https://github.com/blmille1/aws-sam-layers-template.


Please refer to the repo for a more in-depth explanation.

Be blessed!

Friday, December 14, 2018

Extending Newtonsoft JSON JToken to output only the text values

I recently needed to take a big JSON payload and dump it to text just like you can do with an XML Document using InnerText/OuterText.

At first glance, it looked complicated and it just felt like something that should have come out-of-the-box.

Well, here is the Extension Method:


    using System.IO;
    using Newtonsoft.Json.Linq;

    public static class ExtensionMethods {
        // Returns only the string values found anywhere in the token tree.
        public static string OuterText(this JToken token) {
            var sw = new StringWriter();
            token.OuterText(sw);
            return sw.ToString();
        }

        public static void OuterText(this JToken token, StringWriter sw) {
            switch (token.Type) {
                case JTokenType.String:
                    sw.WriteLine(token.ToString());
                    break;
                case JTokenType.Property:
                case JTokenType.Array:
                case JTokenType.Object:
                    foreach (var item in token.Values()) {
                        OuterText(item, sw);
                    }
                    break;
            }
        }
    }


Here's an easy way of using it:

        public void Test() {
            var jsonString = "{ \"Title\":\"Extending Newtonsoft JSON JToken to output only the text values\", \"OtherStuff\":[ {\"text\": \"I sure wish I had a way of converting a JSON Object to text, but without it being in JSON format...\"}, \"Wait a minute, ... I CAN!\", { \"objectContainingText\": { \"numberProp\": 1, \"stringProp\": \"Hooray for extension methods!\" } } ] }";
            var jsonToken = JsonConvert.DeserializeObject<JToken>(jsonString);
            var jsonText = jsonToken.OuterText();
        }

Here's the result:
Extending Newtonsoft JSON JToken to output only the text values
I sure wish I had a way of converting a JSON Object to text, but without it being in JSON format...
Wait a minute, ... I CAN!
Hooray for extension methods!


Originally, I was deserializing to a JObject; however, a JObject is also a JToken, so I wrote the extension method against JToken to be versatile enough to handle the other token types as well.

Wednesday, July 6, 2016

Running Solr 6.1 as a Windows Service

Goal

Get Solr 6.1 running as a Windows Service and have it recover from OutOfMemoryErrors.

Install Solr 6.1 as a Service

  1. Download NSSM
  2. Extract nssm.exe somewhere
  3. Create solr_start_6.1.0.cmd (this is a Windows Command file that does all of my configuration--yours will definitely be different):
    C:\apache\solr-6.1.0\bin\solr start -f -h node3 -c -p 8983 -z "zk1:2181,zk2:2181,zk3:2181" -m 128m -s C:\apache\solr-6-cores
    NOTE: The -f flag runs Solr in the foreground.  I set the JVM heap size to 128 MB (we want this thing to crash and burn so we can test the OutOfMemoryError restart!).
  4. Test your script to make sure it starts solr and you can access your Solr Admin UI in a web browser
  5. Open a command window and navigate to your nssm.exe directory
  6. nssm.exe install "Apache - Solr 6.1"
    1. Application Tab
      1. Path: Select your solr_start_6.1.0.cmd from earlier
      2. Startup Directory: set it to the directory containing your script (should populate by default)
    2. Details Tab
      1. Display Name: Apache - Solr 6.1
      2. Startup type: Automatic
    3. Log on Tab
      1. Make sure you specify an account that has administrator-level permissions (Use your account if you're stuck here--but make sure to set it to something production-worthy later)
    4. I/O Tab
      1. I/O Redirection
        1. Output (stdout): Set this to something like path\to\my\solr\cmd\script\dir\solr-6.1.0-out.txt
        2. Error (stderr): path\to\my\solr\cmd\script\dir\solr-6.1.0-error.txt
      2. File rotation
        1. Check Rotate files
        2. Check Rotate while service is running
        3. Restrict rotation to files bigger than: (use common sense here, I did 5 MB, so 5242880 went into the box)
    5. Click Install Service
  7. Open Component Services and select Apache - Solr 6.1
  8. Start the service
  9. Validate that it came up by going to your Admin UI webpage

Make the Solr Service respond to OutOfMemoryErrors

  1. Navigate to this JIRA ticket
  2. Download oom_win.cmd and place it in your solr\bin directory next to solr.cmd
  3. Open solr.cmd in a text editor
  4. Find all the places where the script starts the server:
    1. Search for /solr_gc.log
  5. Immediately after /solr_gc.log, paste the following:
    -XX:OnOutOfMemoryError="%SOLR_SERVER_DIR%\..\bin\oom_win.cmd %SOLR_PORT% !SOLR_LOGS_DIR!"
    1. I had to replace two lines.  NOTE: this is just the manual way of applying the patch file associated with the JIRA ticket above--apply it however you prefer.
  6. Now that we've made our changes, go ahead and restart the new Solr 6.1 service so it knows to kill the process on an OutOfMemoryError.
  7. To force an OutOfMemoryError, query *:* and return 1000000 rows
    1. If you have a decent amount of content, this should force an OutOfMemory exception.  If you don't have a lot of content, do whatever you can to make it do a lot of memory-intensive work.  Perhaps consider lowering the JVM memory, too.
    2. You should see the web server go offline temporarily and then come back online
  8. Now that you've seen it restart and come back online, let's give the JVM a good amount of RAM so that it doesn't run out of memory every other request.  Just edit your solr_start_6.1.0.cmd file and change the -m 128m to -m 4g (128 MB to 4 GB)
  9. Save and restart the service
  10. Confirm that you have the new amount of RAM for the JVM by visiting the Dashboard tab in the Admin UI

Logs

When the OutOfMemory killer runs, it generates a log file in the normal log directory.  Navigate to that directory and you should see a file that looks something like solr_oom_killer-9000-2016-07-06_13_59_39.  Now you know when this script runs, and you can hopefully anticipate it in the future or make changes to avoid it.

Final step (important!)

Do a happy dance!

Tuesday, May 24, 2016

Apache Solr - RangeQuery

RangeQuery appears to be a range query that operates on strings.  I'm not sure if it only works on strings or if the strings are a way for it to auto-determine what field type it's going to work with.
If you know how to properly use this, please leave a comment below.  I'm stumped on this one.

Fields

  • fieldName (required, either here or on a parent element)
  • lowerTerm - string value
  • upperTerm - string value
  • includeLower (optional, default true)
  • includeUpper (optional, default true)

Examples

Simply paste the following into the q= field in the Admin UI.

Here's a query that I came up with, but it doesn't work as I'd expect.

{!xmlparser}
<RangeQuery
 fieldName="price"
 lowerTerm="1.00"
 upperTerm="3.00"
 includeLower="true"
 includeUpper="true">
</RangeQuery>

The above query returns no results, which isn't what I was expecting.


{!xmlparser}
<RangeQuery
 fieldName="price"
 lowerTerm="0"
 upperTerm="10.00"
 includeLower="true"
 includeUpper="true">
</RangeQuery>

The above query returns 30 documents, including some with prices > 10.00, which isn't what I expected.  My best guess is that RangeQuery compares the raw indexed terms as strings, so on a numeric field (whose indexed terms are encoded) the results won't line up with the human-readable values--but again, if you know for sure, please leave a comment.