Found a surprising issue in Xtext.
Apparently, the Xtext generator has this AntlrToolFacade class that tries to download http://download.itemis.com/antlr-generator-3.2.0-patch.jar, save it in the current dir as .antlr-generator-3.2.0-patch.jar, and then load classes out of it.
Which is pretty crazy.
First of all, if you use Nexus or Artifactory to make sure your builds do not depend on third-party resources being up – this breaks that guarantee.
Second, downloading something over plain HTTP without any checksum verification is about as unsafe as it gets. The issue occurred in the first place because Itemis started redirecting this HTTP path to HTTPS – but the code that does the download can’t even follow redirects :facepalm: And the “fix” that was made was to disable the redirect and keep serving the JAR over HTTP (counts as a quickfix, but otherwise – :double-facepalm: ).
And in general, having URLs hardcoded is just ugly.
So, the quickest solution is to simply commit antlr-generator-3.2.0-patch.jar as .antlr-generator-3.2.0-patch.jar into the root of your project – so that it doesn’t get downloaded during builds.
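For comparison, here is a minimal sketch of what a safer downloader could look like in plain Java – handling a one-hop redirect manually (HttpURLConnection won’t cross from HTTP to HTTPS on its own) and checking the file against a known SHA-256 digest. This is purely illustrative, not Xtext’s actual code; the URL and the expected digest would have to come from a trusted source.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class CheckedDownload {

    // Hex-encodes the SHA-256 digest of the given bytes.
    public static String sha256Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    // Downloads url to target, following one redirect hop manually
    // (HttpURLConnection refuses to follow HTTP->HTTPS redirects itself),
    // then verifies the file against a known SHA-256 digest.
    public static void fetch(String url, Path target, String expectedSha256) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setInstanceFollowRedirects(false);
        int code = conn.getResponseCode();
        if (code >= 300 && code < 400) {
            String location = conn.getHeaderField("Location");
            conn.disconnect();
            conn = (HttpURLConnection) new URL(location).openConnection();
        }
        try (InputStream in = conn.getInputStream()) {
            Files.copy(in, target);
        }
        if (!sha256Hex(Files.readAllBytes(target)).equalsIgnoreCase(expectedSha256)) {
            Files.delete(target); // don't leave an unverified file behind
            throw new IOException("Checksum mismatch for " + url);
        }
    }
}
```

Even this small amount of verification would have turned the redirect problem into a clear, immediate error instead of a silently broken build.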
If you have a better idea though – drop me a comment ;-)
I’ve just run into a weird problem where the RabbitMQ server was running EXTREMELY slow on my laptop for no apparent reason.
After trying a bunch of things and googling for a while I found out that a possible reason might be slow host resolution. But everything was running on localhost!
I checked my /etc/hosts and found out… Well, when I upgraded my MacBook Pro I imported everything from the previous one (using Apple’s Migration Assistant), but I also changed the new laptop’s network name to avoid conflicts with the old laptop being on the same network. What I did not do was update /etc/hosts accordingly – it still had 127.0.0.1 associated only with the old network name.
Apparently, this is a big deal for RabbitMQ. I had been running the laptop like that for more than a month with no issues in any other software whatsoever – but RabbitMQ (or possibly the underlying Erlang VM) apparently does some special name resolution using the computer’s network name, and that just didn’t work. It didn’t report any errors, though – it just ran EXTREMELY (I mean it!) slow.
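The symptom is easy to reproduce in any JVM language: resolving the machine’s own hostname when it’s missing from /etc/hosts (and DNS) can stall for seconds per lookup. A small hedged sketch – the class and method names here are my own, just to time a lookup:

```java
import java.net.InetAddress;

public class HostnameCheck {

    // Returns how long a single name lookup takes, in milliseconds.
    public static long resolveMillis(String host) throws Exception {
        long start = System.nanoTime();
        InetAddress.getByName(host);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        // The machine's own hostname – the name RabbitMQ/Erlang cares about.
        // If it's absent from /etc/hosts, this lookup is the slow one.
        String self = InetAddress.getLocalHost().getHostName();
        System.out.println(self + " resolved in " + resolveMillis(self) + " ms");
    }
}
```

If that prints anything beyond a few milliseconds, check /etc/hosts against your current network name before blaming RabbitMQ itself.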
There is a weird issue with the Internet Explorer WebDriver that I’ve encountered in IE8 (not sure about newer IE versions) – an image with the URL “about:blank” causes WebDriver to get stuck forever in “1 item remaining (downloading picture about:blank)”.
Looks like it has never been resolved.
It also turned out that in my case the “about:blank” images were coming from Progressive IE (PIE), which seems to use them for background images for some reason (see the screenshot of part of the PIE2 code for an example of such an image).
Disabling PIE in test mode solved the issue, though other people have hit it in different situations with different workarounds.
Be advised.
Eclipse WTP Tomcat and JBoss adapters have a certain bug that I encountered recently, and due to the conditions it occurred in, it took almost detective work to figure out.
I added a video below with all the details and a little demo – just for fun.
Long story short, the bug occurs when starting the server (be it Tomcat or JBoss) from Eclipse. On startup, the Eclipse WTP server adapter starts a thread called PingThread that tries to connect to the server. If the connection succeeds, the server is considered “Started”. Otherwise it’s shown as “Starting”, and if a startup timeout is configured, the server gets killed once the timeout passes.
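To make the mechanism concrete, here is a hedged sketch of what such a ping loop boils down to. The names are illustrative, not the actual Eclipse WTP API – the real PingThread is more involved, but the Started/Starting/timeout logic is essentially this:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class PingCheck {

    // Repeatedly tries to connect to the server URL until it answers or the
    // deadline passes. Success => "Started"; deadline => WTP would kill the
    // server process and report a startup failure.
    public static boolean waitForServer(URL url, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try {
                HttpURLConnection c = (HttpURLConnection) url.openConnection();
                c.setConnectTimeout(2000);
                c.connect();      // TCP connect succeeded: the server is up
                c.disconnect();
                return true;
            } catch (Exception e) {
                // Not up yet: still "Starting" – wait a bit and retry.
                try {
                    Thread.sleep(250);
                } catch (InterruptedException ie) {
                    return false;
                }
            }
        }
        return false; // startup timeout reached
    }
}
```

The important consequence: anything that makes this probe fail while the server is actually fine (firewall rules, a changed port, binding to a different interface) makes WTP kill a perfectly healthy server at the timeout.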
A rather important subject today: how to make Spring IoC/MVC-powered portlets work with Liferay services (the ones built from service.xml by running ant with the "build-service" target in your Plugins SDK) in Liferay 5.2.3.
– Spring IoC/MVC
We’re developing a portlet with Spring IoC (and maybe Spring MVC) using the Plugins SDK. We run the create script -> get our sample portlet -> modify web.xml to add the Spring webapp context listener -> and, if we’re using Spring MVC, modify portlet.xml to use Spring’s DispatcherPortlet.
We also put spring.jar into WEB-INF/lib, and if we’re using Spring MVC, spring-webmvc.jar and spring-webmvc-portlet.jar as well (plus maybe utility stuff like the Velocity jars, etc.).
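For reference, the descriptor fragments involved are small. These use the standard Spring class names; the rest of web.xml and portlet.xml is omitted, and the portlet name is just an example:

```xml
<!-- web.xml: bootstrap the Spring webapp context -->
<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
```

```xml
<!-- portlet.xml: only if using Spring MVC -->
<portlet>
    <portlet-name>sample-portlet</portlet-name>
    <portlet-class>org.springframework.web.portlet.DispatcherPortlet</portlet-class>
    ...
</portlet>
```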
– Liferay services
Now we want to have some persistent data and be able to manipulate it, and since Liferay provides nice facilities for this in the form of so-called "Liferay services", we’ll use them. So we create service.xml and run "ant build-service".
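A minimal service.xml, just to make the setup concrete – the entity and columns here are invented for illustration, and the DTD line is the one I’d expect for the 5.2 Service Builder:

```xml
<?xml version="1.0"?>
<!DOCTYPE service-builder PUBLIC "-//Liferay//DTD Service Builder 5.2.0//EN"
    "http://www.liferay.com/dtd/liferay-service-builder_5_2_0.dtd">
<service-builder package-path="com.example.sample">
    <entity name="Note" local-service="true" remote-service="false">
        <column name="noteId" type="long" primary="true" />
        <column name="text" type="String" />
    </entity>
</service-builder>
```

Running "ant build-service" against this generates the NoteLocalService classes (and their own Spring wiring – which is where the trouble below comes from).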
After that we add some logic that uses the generated services, hit "ant deploy" and… get exceptions about the Spring context already being initialized for this webapp!
Liferay’s SEO capabilities seem to be surprisingly weak when it comes to URL management. Consider an example: you’re building a webapp that does some abstract searches over some search data sources and presents the results on one page.
You want the page to have a URL like http://<host>[:<port>]/section/subsection/search/<keyword>[?someParam=<value>]
Particular goals: the URL can be generated by another website that knows nothing of our Liferay-based portal’s internals, and the URL should be clean and bookmarkable.
On the page you want some portlets, provided by different development teams/vendors, that would take the keyword and present results. The portlets should be independent, since new ones can be added over time and you want to be able to commission several new portlets in parallel from independent vendors. Thus every portlet on the page should be able to obtain the <keyword> and <value> passed to the page in the URL.
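Whatever mechanism ends up delivering the URL to the portlets, the parsing side is trivial. A hedged sketch – this helper is my own invention, not a Liferay API, and it assumes the keyword is whatever follows the "/search/" segment:

```java
import java.net.URI;

public class KeywordExtractor {

    // Extracts the trailing path segment after "/search/" from a full URL.
    // Returns null if the path has no "/search/" segment.
    public static String keywordFrom(String url) {
        String path = URI.create(url).getPath(); // query string is excluded
        int i = path.indexOf("/search/");
        return i < 0 ? null : path.substring(i + "/search/".length());
    }
}
```

The hard part in Liferay 5.2 is not this parsing but getting the original request URI into each independently developed portlet in the first place.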
Support for logical operators in the ElasticPath search phrase is one of those features that are probably not really needed in most cases, and thus not very beneficial. Combined with a somewhat sloppy implementation that adds potential for errors, such a feature is more likely to frustrate unaware end-users than to be of any use to them.
A proper implementation of logical operator support would parse the search phrase, identify the logical operators, and build the corresponding SOLR/Lucene query condition. But that would be rather heavy. Much easier is to pass the search phrase into the query condition directly and let the SOLR/Lucene parser deal with it. Naturally, if the resulting condition is syntactically invalid and unparsable, the error happens on the SOLR side and is harder to deal with.
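If you go the pass-through route and just want raw user input to never break the parser, the usual fix is to escape Lucene’s special characters (Lucene even ships QueryParser.escape for this). A self-contained sketch of the same idea, so the character list is visible – note it escapes everything, so it also disables operators like AND/OR grouping via parentheses:

```java
public class SearchPhraseSanitizer {

    // Lucene query syntax special characters (per Lucene's query parser docs;
    // the two-character operators && and || are covered by escaping & and |).
    private static final String SPECIALS = "+-&|!(){}[]^\"~*?:\\";

    // Backslash-escapes every special character so the phrase is treated
    // as literal text by the SOLR/Lucene query parser.
    public static String escape(String phrase) {
        StringBuilder sb = new StringBuilder(phrase.length());
        for (char c : phrase.toCharArray()) {
            if (SPECIALS.indexOf(c) >= 0) sb.append('\\');
            sb.append(c);
        }
        return sb.toString();
    }
}
```

This is the trade-off in one function: escaping makes arbitrary input safe, while the unescaped pass-through keeps the operators working but lets a stray parenthesis take the search down.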
What happens to your out-of-the-box EP6.1.2 (or ALCS6.0) when you do searches like these: