[ April 2020: the SaaS part of this project has been installed under http://componentology.com and http://atrak.io; the current documentation is still fully valid, albeit written from a more self-hosted perspective. ]
In this rollout we're wrapping up some important feature upgrades and introducing an original concept of ours into atrak.io: a Ghost infrastructure for multi-tenancy security and optimisations.
We've completed the card-deck interface, now offered as an optional view among the Listing interfaces, along with a calendar view that still requires some love. The Editing interfaces have seen a major upgrade with the now-working Content-Tools integration, and the multi-lingual facets have been resolved through some intelligent UX design. These are also part of the optional features (meaning a user can elect to view a record in the view of their choice; it's not an "additional feature at an extra cost" :)
Lots of additional documentation can be found under our blog section as well.
We're also introducing a documentation index to centralize relevant documentation for users of all levels. It can be found under the Documentation Index.
Our latest iteration of a complete 4GL system built on relational databases is out. Christmas time has been very productive for our team, and we've put the finishing touches on our latest integrated creations.
notes:
* Although we have not implemented ACLs at the Field level, each View can have different display settings for its fields, effectively granting the ability to mask fields in specific Views (a sketch of this idea follows these notes). This is one of the deliberate limitations of the system.
** To avoid potential data loss, the developer needs to manually refresh or submit the current view when working in an Edit view.
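To make the first note a bit more concrete, here is a minimal, purely hypothetical sketch of what per-View field display settings could look like. None of these names (ViewDefinition, FieldDisplay, visible, readOnly) come from Magma's actual configuration; they are illustrative only, showing how two Views over the same record type can expose different fields.

```ts
// Hypothetical sketch only: per-View display settings standing in for field-level ACLs.
interface FieldDisplay {
  visible: boolean;   // hide the field entirely in this View
  readOnly?: boolean; // show the field but prevent editing
}

interface ViewDefinition {
  name: string;
  fields: Record<string, FieldDisplay>;
}

// Two Views over the same "customers" record: the support View masks billing data.
const salesView: ViewDefinition = {
  name: "customers-sales",
  fields: {
    name:        { visible: true },
    email:       { visible: true },
    creditLimit: { visible: true, readOnly: true },
  },
};

const supportView: ViewDefinition = {
  name: "customers-support",
  fields: {
    name:        { visible: true },
    email:       { visible: true },
    creditLimit: { visible: false }, // masked: effectively a per-View field restriction
  },
};
```

The point is simply that the restriction lives in the View definition, not in a per-field ACL, which is why the same field can be visible in one View and hidden in another.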
Currently on hold is the idea of hosting this in (name-your-brand) Cloud. Because we cannot certify their infrastructure against data theft, we're inclined to host this service on our own (already capable) infrastructure for "serious" customers. This might present some limitations for customer acquisition and growth at first, but we feel it's the only way to stay "true" to our security concepts and philosophy. With that said, we do not foresee offering >free< accounts on these systems. Perhaps trials of some sort (most probably a refundable trial period, to show some proof of engagement).
For the time being, this engine is deployed internally for our own customers and their various customized, dedicated Extranets.
For starters, we feel that "Core" is an abused word on the Internet. Originally (20 years ago?) the name was quite suited to our concept of one core object model upon which to build a multitude of web offerings. Today, online search results only serve to create FUD for a potential brand centered on a single word such as "Core".
Magma is related to a dream of its lead architect (a literal one). Always fascinated by speleology and geology, said lead architect has had the occasion to get close to many volcanoes. They present a form of constancy on geological timescales, yet are always moving, growing or shrinking in their presentation.
So we saw fit to apply this name to this system, since that's exactly what it does: it presents, on the surface, managed data that can always be shifting in its constituents, location and presentation, yet still revolves around one single core, without having to resort to direct programming.
As for 2020: the first stable version of what we consider the Magma Version was completed right around Jan 1st, 2020.
Well, it's back to business for the development team, with other projects to maintain and set up, so we'll be pushing this new engine strongly into new projects in the coming months, to speed up prototyping time (immensely!) and start organising some major data security departments. All the while, we'll be making the final adjustments to the language aspects of the interfaces, and probably to some of the functionality based on user feedback.
Lots of documentation management lies ahead for us, so we'll definitely reuse this engine to set those systems up, internally and publicly (like this blog, in fact).
And we are currently back-porting the FileManager module with a brand-new dual-panel interface.
Finally, the hooks are in place for the Automation to come in with big improvements at the design level. We plan to integrate another layer of backend systems, including storage clusters, backend data processing, data streaming and web services.