Friday, March 3, 2017

Choosing a plugin

I follow this checklist whenever choosing a plugin, or reviewing one already chosen, for a full-stack software development project (e.g. a JavaScript or Node.js plugin).

Checklist:
  1. Gather all requirements for a project that a plugin needs to support
  2. Support: Consider the browsers the plugin currently supports. Is the author still making updates to the plugin on GitHub? Is there a fork that is now more actively maintained by another author? Is this the best plugin in terms of support? Does the plugin need ongoing support, or is it effectively set in stone (Datejs, for example, hasn't been updated in several years but still works just fine)?
  3. Consider how you would write overrides for any changes you may need to make to the plugin.
  4. Consider taking from the plugin only what you need, i.e. seek to understand the parts of the plugin code you find useful, then borrow those parts into your application code.
  5. Decide right now: are you using the plugin to quickly prototype something, with the intention of replacing it later, or will this plugin be permanent? Will this plugin be used in one part of your application or in various parts? What is the expected lifetime of your project? Could future project requirements affect your choice? Consider the risks based on those answers.
  6. Can the pieces of your application code that reference plugin-specific things also be written like a plugin? E.g. mixing a JS plugin with a frontend bootstrap or framework. How can the code that combines a plugin with another third-party piece (i.e. another plugin or framework) be encapsulated and made independent of the rest of your application's code? Ideally, do this in a way that makes it easy to update the JS plugin, remove it, or replace the thing you are combining it with. When that happens, this additional glue code should be easy to update or even remove, because the encapsulated code is easy to locate the moment you choose to change a related piece.
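As a sketch of point 6, all plugin-specific glue can be gathered into one small adapter so the rest of the application never touches the plugin directly. (The names here -- createDatePickerAdapter, attachDatePicker, and the fake plugin -- are hypothetical, just to illustrate the shape of the encapsulation.)

```javascript
// Hypothetical adapter that encapsulates all code touching a date-picker plugin.
// The rest of the application only ever calls attachDatePicker/detachDatePicker,
// so swapping, updating, or removing the plugin means editing this one file.
function createDatePickerAdapter(plugin) {
    return {
        attachDatePicker: function (element, options) {
            // Translate our app's options into the plugin's expected format here.
            return plugin.init(element, { format: options.dateFormat });
        },
        detachDatePicker: function (element) {
            return plugin.destroy(element);
        }
    };
}

// Usage with a stubbed-out plugin standing in for the real third-party code:
var fakePlugin = {
    init: function (el, opts) { return 'initialized ' + el + ' with ' + opts.format; },
    destroy: function (el) { return 'destroyed ' + el; }
};
var adapter = createDatePickerAdapter(fakePlugin);
console.log(adapter.attachDatePicker('#start-date', { dateFormat: 'yyyy-mm-dd' }));
```

If the plugin is later replaced, only the adapter body changes; every call site keeps working.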

Implementing a plugin:
  1. Is your plugin code in the same location as other plugin code?
  2. Will it be dead easy to update your plugin later on? How about removing it? Replacing it?
  3. What will the folder structure look like if you have to hang on to multiple versions of a plugin?
  4. Are your plugin overrides kept in a place where they can easily be located? Is it obvious which version of a plugin they override, and is it done in a way that is consistent with any existing overrides you have for other plugins?

If you must update the actual plugin code:
  1. Are you willing to contribute to the plugin author's github project?
  2. How will you make it absolutely clear in your application that you modified some plugin code? Can those modifications be called out in a custom plugin-overrides file, even if you also describe them in code comments?

Wednesday, June 27, 2012

File Upload or Browse, Filepath or Filename Too Long

This is good to keep in mind if you develop applications or websites:

Websites and applications can safely handle a maximum of 250 characters for filepath + filename when browsing to a file (Windows' MAX_PATH limit is 260 characters, and some software fails before that, so 250 is a safe ceiling). I keep this in mind as a general rule of thumb, and it has saved me time when diagnosing problems with a webpage or an application.

Sometimes when a user tries to browse to and upload a file on a webpage or in an application, and the filepath + filename is too long, the upload will produce an error or just fail silently. Either way, the user will not be able to upload their file. When providing support for this issue, I usually ask them to provide the full filepath + filename of the file they are having trouble uploading, and to send the file as an attachment. This lets me see whether the filepath + filename exceeds 250 chars, and also gives me a chance to check whether there is a lock on the file.

When an application has trouble accessing a file, a lock will often be imposed on it. In this situation, you will be unable to edit, rename, copy, or delete the file as you normally would through Windows. These operations can only be accomplished through the command line, or with the PathTooLong utility explained below. When providing support to a user whose machine I do not have direct access to, I will usually ask if they have the file(s) backed up.

Another non-user-related issue arises when an application programmatically crawls a folder or browses to a file where the filepath + filename is too long. Usually, this produces an error that says something along the lines of "filepath too long". Sometimes we are not talking about just one file, but many files whose filepaths are too long. This error can be difficult to diagnose because it may only tell you about one file at a time as you fix them one by one, and meanwhile the application may have put a lock on every file it had trouble with.

In that situation, if you have access to the machine where the problem lies and want to know which file(s) are causing the issue, there is a great utility that can help. The trial version of the PathTooLong utility can quickly scan a directory and list off all paths that are too long. The full version lets you delete or copy files that it finds, and lets you do this even when an application has put a lock on the file. It is also possible for you to accomplish delete or copy of locked files through Windows command-line.

As a final note: you may be aware that the Windows NTFS and Mac file systems allow a much higher maximum number of characters in filepath + filename, and that separate limits can apply to the filepath and the filename. These higher limits do not help most software or websites running on IIS, because they go through the older Win32 path APIs, which enforce the 260-character MAX_PATH limit. There are a couple of workarounds as far as application development goes but, I am not convinced that they work everywhere or are future-proof.

Again, as a general rule of thumb, I just assume the 250-char max rule for all filepath + filename related issues. This makes it possible to provide a solution that works in every scenario.

Tuesday, August 9, 2011

Use WebClient to decompress a GZip JSON response from cross-domain API

Quite frequently I use jQuery to access external APIs. The way you access data differs between APIs, and some external APIs do not allow cross-domain requests for JSON. A common way around this is to request JSONP, if the API supports it.

Take Twitter's API, for example: It returns JSON from a jQuery .getJSON() request very nicely.

Stackoverflow's API does not respond nicely to a jQuery .getJSON() request, because it is picky with cross-domain requests. However, they give developers the option to request GZip'ed JSONP instead of GZip'ed JSON, which does work with a jQuery .getJSON() request. Also, the fact that we know to expect GZip'ed data from the API will become important later.

Dave Ward's method of requesting cross-domain API data through an ASP.Net "Generic" HttpHandler creates a more reliable workaround that does not depend on JSONP. Rather than relying on JSONP to get around cross-domain issues, the HttpHandler retrieves JSON from any API server-side using System.Net.WebClient. His example of requesting the latest tweets from a Twitter user works very well, allows for caching the results, and has other added benefits.

If you apply Dave's method to retrieve JSON data from Stackoverflow's API, for example, you will be able to request JSON through HttpHandler without the need to resort to JSONP. Retrieving the data works, but now we are dealing with GZip'ed JSON data. System.Net.WebClient's DownloadString() method does not know how to decompress GZip'ed data, and your .getJSON() method would throw an error.

I have built on Dave Ward's method of requesting JSON data by also decompressing GZip'ed JSON data (Please note, this is only intended for API responses that you know will be GZip'ed):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Net;
using System.IO;
using System.IO.Compression;

namespace HttpHandler_Proxy.StackoverflowAPI
{
    public class UserData : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            WebClient stack = new WebClient();

            stack.Headers["Accept-Encoding"] = "gzip";  

            context.Response.ContentType = "application/json";

            string Id = context.Request.QueryString["Id"];

            if (string.IsNullOrWhiteSpace(Id))
                Id = "285714";

            object userDataCache = context.Cache["stackUserData-" + Id];
            if (userDataCache != null)
            {
                string cachedUserData = userDataCache.ToString();
                context.Response.Write(cachedUserData);
                return;
            }

            string baseUrl = "http://api.stackoverflow.com/1.1/";
            
            string request = "users/" + Id;

            byte[] gzippedResponse = stack.DownloadData(baseUrl + request);
            byte[] decompressedResponse = Decompress(gzippedResponse);
            // Use UTF-8, not ASCII, so non-ASCII characters in the JSON survive decoding
            string userData = System.Text.Encoding.UTF8.GetString(decompressedResponse);

            context.Cache.Add("stackUserData-" + Id, userData,
              null, DateTime.Now.AddMinutes(5),
              System.Web.Caching.Cache.NoSlidingExpiration,
              System.Web.Caching.CacheItemPriority.Normal,
              null);

            context.Response.Write(userData);
        }

        // Method to decompress byte array containing gzipped data
        //   borrowed from: http://www.dotnetperls.com/decompress-web-page
        static byte[] Decompress(byte[] gzip)
        {
            using (GZipStream stream = new GZipStream(new MemoryStream(gzip),
                                  CompressionMode.Decompress))
            {
                const int size = 4096;
                byte[] buffer = new byte[size];
                using (MemoryStream memory = new MemoryStream())
                {
                    int count = 0;
                    do
                    {
                        count = stream.Read(buffer, 0, size);
                        if (count > 0)
                        {
                            memory.Write(buffer, 0, count);
                        }
                    }
                    while (count > 0);
                    return memory.ToArray();
                }
            }
        }

        public bool IsReusable
        {
            get
            {
                return false;
            }
        }
    }
}

The following snippets are what I added to Dave Ward's code in order to decompress GZip'ed results:

The Decompress() method relies on System.IO and System.IO.Compression:
using System.IO;
using System.IO.Compression;

Set the Accept-Encoding header to accept the Stack Overflow API's GZip'ed results:
stack.Headers["Accept-Encoding"] = "gzip";  

Our request to the Stack Overflow API becomes a 3-step process: store the GZip'ed result in a byte array, decompress it, and then turn it into a string:
byte[] gzippedResponse = stack.DownloadData(baseUrl + request);
byte[] decompressedResponse = Decompress(gzippedResponse);
string userData = System.Text.Encoding.UTF8.GetString(decompressedResponse);

The method to decompress a byte array of GZip'ed data was borrowed from: http://www.dotnetperls.com/decompress-web-page
static byte[] Decompress(byte[] gzip)
{
   using (GZipStream stream = new GZipStream(new MemoryStream(gzip),
                          CompressionMode.Decompress))
   {
      const int size = 4096;
      byte[] buffer = new byte[size];
      using (MemoryStream memory = new MemoryStream())
      {  
         int count = 0;
         do
         {
            count = stream.Read(buffer, 0, size);
            if (count > 0)
            {
               memory.Write(buffer, 0, count);
            }
         }
         while (count > 0);
         return memory.ToArray();
      }
   }
}

Wednesday, June 22, 2011

Javascript - Child Class that Inherits from a Parent Class

Edit: For a far better and namespaced approach to what I am trying to accomplish here, please refer to Jaco Pretorius' Better JavaScript - User Defined Objects. He based his methods on Elijah Manor's extremely useful article covering best practices for JavaScript. I especially like how Elijah exposes common bad JS habits and explains why they can be harmful.


Defining classes this way (as functions returning object literals) means every member and method of the resulting objects will always be public.

// Person class
function Person(curFirstName /* string */, curLastName /* string */) {
    return {
        FirstName: curFirstName,
        LastName: curLastName,
        FullName: function() {
            return( this.FirstName + " " + this.LastName );
        }
    };
}

// Musician class inherits Person class
function Musician(curInstrument /* string */, curPerson /* object */) {
    return {
        Parent: curPerson,
        Instrument: curInstrument,
        Details: function() {
            return( this.Parent.FullName() + " plays a " + this.Instrument);
        }
    };  
}


var musician = new Musician('Gibson ES-355', new Person('B.B.', 'King'));

alert(musician.Details());


// You can add on new public members or methods
musician.InstrumentName = 'Lucille';
musician.MoreDetails = function() { return(this.Details() + ' named, ' + this.InstrumentName); };

alert(musician.MoreDetails());
Run and execute the above code with JSFiddle.
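To see the "always public" point in action: nothing stops outside code from reading, or even reassigning, any member of these objects. (Person and Musician are repeated here so the snippet runs standalone.)

```javascript
// Person and Musician repeated from above so this snippet is self-contained.
function Person(curFirstName, curLastName) {
    return {
        FirstName: curFirstName,
        LastName: curLastName,
        FullName: function () { return this.FirstName + ' ' + this.LastName; }
    };
}
function Musician(curInstrument, curPerson) {
    return {
        Parent: curPerson,
        Instrument: curInstrument,
        Details: function () { return this.Parent.FullName() + ' plays a ' + this.Instrument; }
    };
}

var musician = Musician('Gibson ES-355', Person('B.B.', 'King'));

// Every member is public: outside code can read it...
console.log(musician.Parent.FirstName); // prints "B.B."

// ...and rewrite it, and the object's own methods see the change.
musician.Parent.FirstName = 'Riley';
console.log(musician.Details()); // prints "Riley King plays a Gibson ES-355"
```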


On B.B. King's beloved guitar named, Lucille:
In the winter of 1949, King played at a dance hall in Twist, Arkansas. In order to heat the hall, a barrel half-filled with kerosene was lit, a fairly common practice at the time. During a performance, two men began to fight, knocking over the burning barrel and sending burning fuel across the floor. The hall burst into flames, which triggered an evacuation. Once outside, King realized that he had left his guitar inside the burning building. He entered the blaze to retrieve his beloved $30 guitar, a Gibson semi-hollow electric. Two people died in the fire. The next day, King learned that the two men were fighting over a woman named Lucille. King named that first guitar Lucille, as well as every one he owned since that near-fatal experience, as a reminder never again to do something as stupid as run into a burning building or fight over women. - Wikipedia: B.B. King

Tuesday, March 8, 2011

Anything Ruby can do MVC3 can do better. jQuery can do anything better than Flex.

It has taken Microsoft quite some time to catch up to the benefits of Ruby on Rails, Flex and PHP, and they have been doing a great job combining the best of all worlds. MVC3, the soon to be released Entity Framework 4.1, NuGet, and HTML5/jQuery have created a buzz with developers.

Microsoft's Model-View-Controller (MVC) framework adheres to a design pattern that keeps web application development modular. Controllers handle HTTP requests and perform CRUD operations on a data model, while View pages separate the business logic and data layers from the presentation layer. Developers can tell MVC to scaffold View pages based on their Controller Actions, and a View will usually have access to the public members of a data model. Database communication can be written within Controller Actions but, to keep the business logic and data layers modular, the developer can instead reference methods of a repository that performs CRUD operations on the database. In my opinion, the business logic layer is not fully modular and cannot always be constrained to the Controller. It is therefore best practice to keep a web application architecture document that clarifies a developer's or team's intent in using any design patterns or agile development methodologies, and specifies how they are used.

The MVC Framework arms .Net developers with a global URL routing system similar to the one offered in Ruby on Rails. This means you can write custom rules (or use the default rule: /Controller/Action{/Optional ID}/) to hide the true URL path of your web pages and map each request to a Controller Action, which handles the loading of a particular View and passes in an optional ID parameter. Developers can also start building in MVC on top of an existing web site, leaving their older web pages untouched.

The latest version of the MVC framework is MVC3. With it, Microsoft has added support for Razor syntax as an alternative to native C# syntax for coding in the View Engine. Razor is another feature borrowed from Ruby and, not only is it faster to write and easy to learn, it is much easier on the eyes. Also, upon starting a new MVC3 project you will find it uses the HTML5 doctype, and that the jQuery Validation and jQuery Unobtrusive Validation scripts are already referenced in your Master Pages. Microsoft worked with the jQuery team to develop these scripts, which provide better server-side and client-side validation of WebForm fields than the recommended approach in MVC2 did. Currently, Microsoft's Entity Framework, for mapping data models to a database, seems to be competing with the code-first solution NHibernate3. While I love Microsoft's Entity Framework 4.0, it does not yet offer a code-first approach to development. However, it has been announced that Entity Framework 4.1 will support code-first, and I would much rather use Microsoft's implementation. I currently use the EFCodeFirst (CTP5) package because it is built on top of Entity Framework 4.0, and by using it I expect an easy transition into Entity Framework 4.1. It is nice to see Microsoft embracing the most sought-after features, and I am thankful for open-source technologies raising the bar and leaving this room for improvement.

Website and WebForms development has also become a lot more exciting. jQuery UI offers pre-made user interface elements that are very easy to implement in the pages of a .Net MVC3 Web Application. There are also many jQuery "plugins" shared amongst developers, and some have become so good that the jQuery team has agreed to add them to their growing list of officially supported plugins. With a combination of jQuery, jQuery UI, JavaScript, CSS3 and HTML5, developers can create rich interactive web experiences very quickly. This lets programmers build as quickly as most drag-and-drop website-building programs allow. HTML5 offers many features that will be widely supported within the next few years, and may prove to be some tough competition for Adobe Flash. Personally, I have been shifting away from Flash whenever possible and putting my efforts towards HTML/jQuery, in order to develop cross-browser and cross-platform solutions and reduce dependency on browser plugins. And Adobe.. please stop charging so much for the Flash IDE. It crashes on me at least once a day!

Fast Development of Mobile .Net Web Applications that Adapt to Any Platform

Recently I have been developing web applications that are required to be accessed through smartphones, tablets, netbooks, or computers and support all popular types of browsers. Users accessing these applications are a mixture of management and workers or contractors that work outside of the office, in the field. Rather than developing a separate application for each type of user and for each platform, I chose to seek out the best possible way to develop a single web application that will adapt to any scenario.

Business Logic Layer (Controller) and Data Layer (Model):
  • Required Software from Windows Web Platform Installer 2.0: Visual Web Developer Studio 2010 Express or Visual Studio 2010 Professional, SQL Server 2008 R2 and SQL Server Management Studio 2010 (Express)
  • Required Add-ons from Windows Web Platform Installer 2.0: .Net 4.0 Framework, MVC3 Framework
  • Begin new Microsoft .Net C# MVC3 Web Application using Razor View Engine
  • Install reference package through NuGet: EFCodeFirst (CTP5)

First of all, I needed a fast way to develop an administration area of my web applications for company management to use. I decided to go with the Model-View-Controller design pattern, code-first and DRY (Don't Repeat Yourself) development. .Net MVC3 is a framework built on top of the .Net 4.0 Framework, and allows for very fast development of Controllers. I can scaffold Views from the Controller Actions that I write to act on data models. This lets me quickly develop the CRUD (Create, Read, Update, Delete) pages that an administrator can use to manage their company data. I use Razor syntax in the View Engine, which is new to MVC3. It provides an easier way of server-side scripting in web pages than native C# scripting would allow, and is borrowed from Ruby. I use EFCodeFirst (CTP5) with Entity Framework 4.0 for a code-first and DRY approach to development. I can scaffold a physical database from my data models, so I am only required to make data model/database changes in one place. Also, I can specify client-side data validation requirements from within these data models, so that validation only needs to exist in one place and this added logic stays separated from my View pages. Since admins will usually be working from within an office, I decided that they will need to be on a computer with a minimum screen resolution of 800x600. Tablets or netbooks will work well but, access through smartphones is not recommended because of the small display area inherent in these devices.

Presentation Layer (View):
  • Required package: jQuery Mobile

Secondly, I needed a fast way to develop the end-user area of my web applications (and I separate end-user from administrator in this example). The web application will need to be accessed from many different platforms of smartphone, tablet or computer. I had created a few tests but found it tough to develop a separate design for each smartphone or tablet. The design needed to adapt to the horizontal or vertical viewing orientation of the device, and I needed it to be cross-browser and cross-platform ready. After running tests to decide on an open-source solution, I went with jQuery Mobile. It is soon to be out of its alpha stage and ready for production. jQuery Mobile handles most of the cross-browser and cross-platform issues that would take months to develop for. It also makes it easier to storyboard designs because of its fluid layout, which easily adapts to any platform. jQuery Mobile also makes a website look and work similar to smartphone applications, using big user interface controls that react well to tap or sliding motions. jQuery Mobile supports all A-Grade browsers and some B-Grade browsers, including IE6. Their documentation is not ideal, and is written in problem-solution format, but it covers everything a developer would run into. This is the most popular mobile solution right now, has plenty of funding for continued development, and has a large network of web developers using it. Because I have tried out jQuery Mobile on my own projects, I can understand the benefit of the updates being made in their alpha releases. Because I have researched the mobile market, I can understand the benefit of supporting certain older browsers, including IE6.

Windows Web Hosting Service:

Finally, knowing my technical requirements, I decided on a web host that supports the .Net 4.0 Framework, MVC3 and SQL Server 2008, and that has exceptionally helpful and responsive technical support. I went with WinHost and I highly recommend their service. I have been able to successfully host multiple sites under a single domain. Feel free to contact me with any questions regarding domain/sub-domain routing, database data types supported by WinHost, or anything else, as I have likely run across a similar situation.

Using these methods, tools, and packages, I can significantly reduce my time spent on GUI design and database management. I can quickly develop an enterprise-level web application that meets the growing demands of today's market. It took me 1 week to test out different methods and decide on the best tools and packages to use, and now I am able to develop a complete web application within a few days. After development and through networking with industry professionals, I am confident that I chose the best options available.

Friday, July 16, 2010

[AS2] Understanding localToGlobal or globalToLocal

I don't feel that the Flash Help docs explain localToGlobal/globalToLocal very well -- I never fully understood them. I might have simply assumed that localToGlobal would always give me a point Object with values referring to the Stage ("_root") scope.

However, localToGlobal returns a modified point (x,y) Object related to the parent ("_parent") of the scope movieClip that calls the localToGlobal function. It does not return a point Object related to the Stage ("_root"), unless Stage is the parent of the movieClip triggering localToGlobal.


Example 1)

var point:Object = {x:truck.truckWheel._x, y:truck.truckWheel._y};

truck.truckWheel.localToGlobal(point);

// This modifies point Object values to reflect the coordinates within truck timeline, the "_parent" of truck.truckWheel.

// You now have values that show you coordinates of truckWheel (point.x and point.y) within the truck movieClip, and for most cases this is probably useless to you.


Example 2)

var point:Object = {x:truck.truckWheel._x, y:truck.truckWheel._y};

truck.localToGlobal(point);

// This modifies point Object values to reflect the coordinates within Stage, the "_parent" of truck.

// You now have values that show you coordinates of truckWheel (point.x and point.y) on the Stage, which is probably more useful in most situations where you would want to use this function.