Monthly Archives: December 2014

Site Migration: done!! Take that 2015 :D

Links in his heaven; all’s right with the world wide web 😀

I just finished checking the links and content on the blog and site, so hopefully everything is working now.
I also deleted a bunch of posts… I hope I didn’t break the INTERNET 😛

Happy new year.

Moving to OpenShift Host

From GoDaddy To OpenShift

I am moving to another host. I will try to check all links and review everything, but until I am finished there may be a lot of broken stuff here.

Motivation

When I started my hobby website I decided to buy a cheap host, and I found GoDaddy at a nice price. At the time a shared host was very cheap, and even cheaper if you bought it for 4 years, so I did, and I got stuck with GoDaddy for 4 years. Now the hosting is about to expire, which for me is not a bad thing, because I wanted to move to another host with better support and better management tools; nodejs support would be a plus. But since it's a hobby I didn't want to spend much money on it, and fortunately I found some very good solutions, one of them being OpenShift.

Why OpenShift

When I started looking for cheap hosts with nodejs I always ended up at cloud platforms. Two with a free plan actually caught my attention: OpenShift and Heroku.

Both of them are fairly similar: both have console management tools, git deployment, and both are scalable. On Heroku I particularly liked how easily you can manage and share the resources used by each application; on OpenShift, if that is possible, it's not so obvious.

Anyway, after trying to set up my WordPress blog on both, I quickly decided to go with OpenShift, for the simple reason that the free plan has the MySQL DB needed by WordPress and offers more DB space. Heroku's free plan has a PostgreSQL DB, and WordPress doesn't support it out of the box.

Until now OpenShift is …

…great :D. I am really happy with OpenShift:

  • Free Plan: The free plan is a great feature on any service, because it lets you try it and see if it fits your needs. Besides, the motto “Pay as you grow” makes sense to me; it's a great way to get customers and a good marketing tool. But yeah, I like it because I don't have to pay 😀
  • Git: For any project involving development it's great to have a version control system, and it's great to use it as a deployment system as well. OpenShift uses git as
    the primary tool to deploy applications live.
    In my “real” work I always set up my services and websites to deploy with git; it's a simple matter of pulling the master branch. Normally I use 2 “standard” branches: master (the production-ready branch) and dev (the branch ready to be merged into master). Any other development is done on other branches.
  • Certificate Authentication: git and the other OpenShift tools can log in with a certificate, so you only need to set it up once on your computer and then you never
    need to worry about authentication again 😀
  • Management Shell tools: OpenShift applications can be managed with a single shell command tool. The “rhc” command can perform a lot of management tasks, like
    setting up your account certificate, and a lot of other stuff. I especially like the command
    rhc ssh; it simply starts an ssh shell on the machine where your app is.
  • Nodejs: It supports nodejs :D…

There are probably a lot more great features that I will discover with time.

OpenShift DNS pain…

Note that OpenShift is not a domain name registrar or DNS provider/manager, but I needed to get my domain names working with it.
The biggest problem I found was configuring my domain names to work the way they did before.

The problem is that I am using fsvieira.com to point to my page and web services. I recently discovered that this kind of name is called a naked or root domain, and apparently it's not a good idea to use them for web stuff.

The problem is that I can use a CNAME (alias) on a sub-domain, so www.fsvieira.com can point to the cloud host, but I can't use a CNAME (alias) with fsvieira.com. This limitation
seems to come not only from the domain providers but also from the RFC, and with good reason, because
the root domain name can be used by other services, and a CNAME is not allowed to coexist with the other records the root needs.

Anyway, I made the sub-domain www.fsvieira.com pointing to my cloud app, and then redirected fsvieira.com (301) to www.fsvieira.com with "forward only", because masking is really bad: it wraps your site in an iframe whose source points to the redirect target, so the URL in the address bar never changes, and if you have a REST service sending JSON it will be bad.

The redirect seemed to work okay: every time I accessed fsvieira.com it was changed to www.fsvieira.com, and everything looked good.

But when I tried to use my public (AJAX) REST service (with CORS enabled to allow everyone) it failed. It works with www.fsvieira.com but not with fsvieira.com, because of the redirect.
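
Just to illustrate the failure mode, here is a rough sketch (the "/some-api" path is made up, and the exact browser behaviour with redirects on CORS requests varied at the time):

    // Rough sketch of the failure, not my actual service code.
    // A request to the naked domain gets answered with the 301 redirect, which
    // carries none of the CORS headers my app sends, so the browser rejects it.
    var bad = new XMLHttpRequest();
    bad.open("GET", "http://fsvieira.com/some-api");
    bad.onerror = function () { console.log("request to the naked domain failed"); };
    bad.send();

    // The same request against the www sub-domain reaches the app directly and works.
    var ok = new XMLHttpRequest();
    ok.open("GET", "http://www.fsvieira.com/some-api");
    ok.onload = function () { console.log(ok.responseText); };
    ok.send();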

So the solution was to make a CNAME for fsvieira.com pointing to the cloud, and I did it with
CloudFlare. It is very easy, you don't need to transfer the domain to be able to use their services, and from what I saw there are plenty of other interesting services that they provide.

Conclusion

OpenShift is a great hosting solution, even for hobbyists. CloudFlare is great for managing your domain names and doing stuff that you can't do with other domain providers.

If I had known then what I know now, I would certainly have used www.fsvieira.com to point to my services;
it would have been much simpler to change them ;).

That's it. I hope to finish the migration of my website very soon, and sorry for any broken links and pages.

zebrajs update, testing and testcase generation

ZebraJS Update

Over the last few days I have been rewriting the variables lib. This update includes:

  • Added examples (exemples/logic/add.js) from my previous post,
  • Better memory management with a version system (commit, revert, remove) instead of the old dumb stack (save and load) system (see the sketch after this list),
  • Simplified the code to achieve better performance,
  • Testing and debugging libs:
    • lib/variables_test.js: Monitors all lib operations and throws an exception if it finds an inconsistency in the lib,
    • lib/variables_test_msg.js: Monitors all lib operations and stops the program with an error message,
    • lib/variables_testcase.js: Monitors all lib operations, stops the program with an error message, and generates a test case for mocha.
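
To give an idea of what I mean by a version system, here is a rough sketch of the concept (this is not the real zebrajs code; all the names are made up for illustration):

    // Rough sketch of the concept only, not the real zebrajs implementation.
    function VersionedState() {
        this.version = 0;   // current (uncommitted) version number
        this.values = {};   // key -> list of {version, value} entries
    }

    // Record a value against the current version.
    VersionedState.prototype.set = function (key, value) {
        (this.values[key] = this.values[key] || []).push({version: this.version, value: value});
    };

    // Latest value for a key, whatever version wrote it.
    VersionedState.prototype.get = function (key) {
        var entries = this.values[key];
        return entries && entries[entries.length - 1].value;
    };

    // commit: freeze everything written so far and start a new version.
    VersionedState.prototype.commit = function () {
        return this.version++;
    };

    // revert: drop everything written after the given version, instead of
    // restoring a full copy of the state like a save/load stack would.
    VersionedState.prototype.revert = function (version) {
        for (var key in this.values) {
            this.values[key] = this.values[key].filter(function (entry) {
                return entry.version <= version;
            });
            if (this.values[key].length === 0) { delete this.values[key]; }
        }
        this.version = version + 1;
    };

The win over the old stack is that commit and revert only touch the entries written since the chosen version, instead of saving and restoring a copy of the whole state.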

The sudoku puzzle now takes almost 3 seconds to solve and to check all possibilities:

    real 0m2.837s
    user 0m2.496s
    sys  0m0.038s

I think that's good, since we have to take into account the time nodejs takes to start up the program.

Testing and testcase generation

When we rewrite part of a program, bugs will always appear, especially in big rewrites. That's why having a test framework is useful: we run all the tests and fix everything that doesn't pass, then we run our examples and they all fail and burn lol…
So I think it's good practice, when you find bugs, to write some more tests; this will prevent future failures and will also confirm your bug suspicions… The problem is when you write a bunch of test cases and all of them pass, meaning that the bug is trickier than you expected.

So after I got tired of writing tests, I tried to find a good tool to help me, but I found none. The problems were:

  • Debugger: node-inspector doesn't work very well for me, and besides, debugging requires a fair amount of work and inspection to find bugs,
  • Profiler: I can't understand the output of profilers and can't get the information that I need,
  • Symbolic Execution: I have been reading about this and it seems great, but I didn't find anything for nodejs,
  • Other options: There are probably other options; I just made a quick search and didn't bother to look into how to customize the tools to do what I expected… shame on me 😛 (any suggestions or comments are welcome).

So I decided to take another approach: I made a proxy class that has these features (a minimal sketch follows the list):

  • Monitors all calls on an object,
  • For every call made, it constructs a call trace with the provided arguments,
  • Extensible through the functions:
    • before: executed before the call,
    • after: executed after the call,
    • error: executed if an exception occurs,
  • Can run the trace again and generate a minimal test case,
  • Setting up the proxy trace should be just a matter of changing the required libs to the proxy libs; no other changes in the code should be needed.
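
Here is a minimal sketch of the wrapping idea (the before/after/error hook names come from the list above; everything else is made up, and the real proxy lib does more than this):

    // Minimal sketch of the call-monitoring idea, not the actual proxy lib.
    function makeProxy(target, hooks) {
        var trace = [];
        var proxy = {trace: trace};

        function wrap(name) {
            proxy[name] = function () {
                var args = Array.prototype.slice.call(arguments);
                var entry = {name: name, args: args};
                trace.push(entry);                          // build the call trace

                if (hooks.before) { hooks.before(name, args); }
                try {
                    entry.result = target[name].apply(target, args);
                    if (hooks.after) { hooks.after(name, args, entry.result); }
                    return entry.result;
                } catch (e) {
                    if (hooks.error) { hooks.error(name, args, e, trace); }
                    throw e;
                }
            };
        }

        for (var key in target) {
            if (typeof target[key] === "function") { wrap(key); }
        }

        return proxy;
    }

Each entry in the trace ends up with the call name, the arguments and the returned value, which is exactly the information the test case generator needs later.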

And this is how it's working for the zebrajs lib:

  • I created a proxy for Variables and VariablesFactory,
  • In the before and after calls I check my variables' integrity with the should.js lib; if something is wrong I throw an exception,
  • In my error handling function I call proxy.sandbox to generate a small test case where the error occurs,
  • I go to one of the examples and change the 'require("lib/variables")' to my proxy 'require("lib/variables_testcase")' (see the snippet after this list), and then run the example like this:
    'nodejs my_example.js > log.txt'. If an error occurs it will present me with a test case at the bottom of log.txt.
  • I copy the test case to my mocha test/ folder; normally I change it by adding more tests, checking the expected values and changing them to what they should be, etc., etc…
  • I then fix the bugs, check that everything is right, and keep the test case for future development 😉
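
For reference, the only edit in the traced example is the require line (the variable name below is a guess; use whatever lib/variables actually exports):

    // var VariableFactory = require("lib/variables");        // normal run
    var VariableFactory = require("lib/variables_testcase");  // traced run, same API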

So far I think this proxy object is pretty useful for testing "real" programs/examples. Of course not everything is great: the proxy lib may have bugs (it probably has, since the code is a mess), the check functions may have bugs, performance drops a lot, and some test cases may take some time to generate.

Here is a test case that was generated (this error occurred as a normal JS exception; the test case generated at first was really big, so I then added some checks to catch the bug early…):

 it("should f_0 commit", function() {
                var f_0 = new VariableFactory();
                var v_71 = f_0.v({domain:[1,2,3,4,5,6,7,8,9], });
                var v_80 = f_0.v({domain:[1,2,3,4,5,6,7,8,9], });
                should(v_71.notUnify(v_80)).eql(true);
                should(v_71.notUnify(v_80)).eql(true);
                /*
                        invalid version number < 0
                        f_0 commit
                */
                should(f_0.commit()).eql(0); // I changed this from -1 to 0 since it was the correct expected value,
  });

This is a slightly tricky bug, since it didn't occur if I commented out one of the lines with notUnify. It was cool to see that the generated test case was
exactly the smallest one for which the error occurs; it made my day lol...
And that's it 😉 ...

Testcase generator algorithm

The test case generator is a pretty dumb piece of code. When an error occurs, proxy.sandbox does the following:

  • It grabs only the calls at the root of the trace (it is not interested in internal calls) and builds a list of them,
  • Once the call list is created, it sets up a "clean" and "controlled" environment in which to execute the calls,
  • Then it runs the call list, ignoring one call at a time starting from the bottom of the list; if the error still occurs, the ignored call can be removed for good.
    It keeps doing this until it reaches the beginning of the list (see the sketch after this list),
  • Finally it creates a string, as a mocha test case, with the remaining calls in the list. Since it has all the information, such as returned values, it can
    create assertions for the expected values; of course some of them may be wrong, but that must be checked by the programmer, and a wrong assertion is
    definitely a bug.
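
In code, the reduction step is roughly this (just a sketch; reproducesError stands in for "replay these calls in a clean environment and check whether the same error happens"):

    // Sketch of the one-pass reduction described above, not the real proxy code.
    function minimizeTrace(rootCalls, reproducesError) {
        var kept = rootCalls.slice();

        // Walk from the last call towards the first, trying to drop each one.
        for (var i = kept.length - 1; i >= 0; i--) {
            var candidate = kept.slice(0, i).concat(kept.slice(i + 1));
            if (reproducesError(candidate)) {
                kept = candidate;   // this call was not needed to trigger the error
            }
        }
        return kept;                // smallest list found that still fails
    }

The calls left in the list, together with the arguments and return values recorded in the trace, are what gets printed as the mocha test case.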

Well, that's it. Happy coding.
