The framework was built to be secure from the first brick. Hence the name: we saw the framework as the "core" of our (then) new security mindset.
This framework was born in the 90s, about the same time the Internet became a public notion. But don't let that fool you: the framework has seen many linguistic iterations and security improvements; it's not that old-school.
The very first framework was built on Oracle using PL/SQL; the idea was born out of a necessity to supply a malleable web solution for the technical support department, where Perl programming was a little breakneck. FileMaker was popular back then, and it was a great visual inspiration for the rest of my work on this framework.
CORE adheres closely to the OWASP security guidelines (even if OWASP figured them out way later than we did), resulting in a framework that's OWASP-compliant, and beyond. For an in-depth look at our certification compliance, consult this other article (@todo insert link to security-compliance article here…).
First and foremost, CORE is meant to be a website and intranet framework. The Intranet framework really serves as the baseline for all webapp development, and the Internet aspect of it is really the "publishing of public stuff", which you manage through the Intranet. The notion of Extranets then comes as an add-on, a form of public yet authenticated published web site. The Intranet therefore serves as the administration interface for the rest of the sites.
That being said, the framework currently supports an unlimited number of sub-sites (Intranet, Extranet or Internet), and at the base of its user management and security sits the Session class object. Our session is somewhat different from typical sessions in that, on its Internet facet, it doesn't necessarily require write access to any server resources. It was built from the ground up (back in the 90s) to be a server-less session. For Intranets and Extranets, we found it just works better to keep things somewhat in sync with the database, at a minimum for user activity tracking and doubly secure authentication. So at a minimum, even the Internet session will make a database call at the moment of processing a login. But if you're not dealing with logins, a strictly Internet website will never resort to database queries for handling sessions. This makes our session object portable across server farms, enabling an early form of horizontal scaling, and it's most probably the reason why we have never used server-based sessions in our lifetime. Server-based sessions are such … a waste of time for the sysadmins.
So, an Internet website normally doesn't force any form of authentication, unless the visitor wishes to complete a registration, for a purchase or a reservation of some sort. Or unless you're programming a social network website. Our Session class object, normally present on every page, simply tracks the user through a cookie object, which serves as a simple identification token. If the visitor logs in to access his personal interfaces, the session will validate the user's password against the database and re-emit a new session cookie containing said authentication ticket.
In Intranet and Extranet environments, we normally require that the user identifies himself before accessing any content. The Session class object will intercept any user without a valid token and redirect him to the login page, with directions for sending him back to where he was if the authentication works out. In these scenarios, the session cookie content is beefed up with additional typical user information such as: real name, username, user ID, organisation ID and some useful flags. All our cookie contents are ciphered using PKI encryption (it is a drag on the servers, but in our view of things, servers are made to handle "security", and the framework is inherently expandable on the horizontal, making the extra CPU cost worthwhile, for us at least). And thus, our Sessions are ciphered on the server before being saved in the user's cookies.
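To make the idea concrete, here is a minimal sketch of ciphering a session payload into a cookie value with libsodium sealed boxes (PKI-style: sealed with the server's public key, opened with its keypair). The real Session class, its key management and cookie names are internal to CORE; everything below is a hypothetical approximation.

```php
<?php
// Illustrative sketch only. Requires the libsodium extension (bundled with PHP 7.2+).

// The server's long-term keypair (in CORE this would come from configuration, not be
// generated per request).
$keypair   = sodium_crypto_box_keypair();
$publicKey = sodium_crypto_box_publickey($keypair);

// Session payload as it might look for an Intranet user.
$payload = json_encode([
    'user_id'  => 42,
    'username' => 'jdoe',
    'realname' => 'Jane Doe',
    'org_id'   => 7,
    'flags'    => ['manager' => false],
]);

// Seal (cipher) the payload with the server's public key, then base64 it so it is
// safe to store as a cookie value.
$cookieValue = base64_encode(sodium_crypto_box_seal($payload, $publicKey));
// setcookie('CORE_SESSION', $cookieValue, ['secure' => true, 'httponly' => true]);

// On the next request, the server unseals the cookie with its keypair.
$plain   = sodium_crypto_box_seal_open(base64_decode($cookieValue), $keypair);
$session = json_decode($plain, true);
```

Note that nothing about this scheme requires server-side session storage: the cookie itself carries the (ciphered) state, which is what makes the session portable across a server farm.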
Within the framework, the Session object is the first instantiated object, along with its dependency, QueryDB. In Intranet and Extranet sites, by default, Session will require a signon token to be present before letting users in. We put extra effort into the framework to make certain that no code, parameters, leaks or exploits are reachable when a valid token is not present. We do not recommend changing this for any reason at all; if you think you have to deactivate the signon requirement, then you're building an Internet website, not an Intranet or Extranet.
These particular settings (RequireSignon, local_Site_Type, BindUsers_to_Their_Sites, etc.) are all found in the *conf.php file associated with the site in question. See our Intro to the Core Framework for a look at these details, and the Core Module wizard generator article to see how we automatically build ready-to-use modules.
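As a rough illustration, a site's *conf.php might carry these settings like so. The setting names are taken from this article; the exact file layout, value types and defaults of a real CORE install may differ.

```php
<?php
// Hypothetical excerpt of a site's *conf.php file (illustrative only).
$conf = [
    'local_Site_Type'          => 'intranet',  // intranet | extranet | internet
    'RequireSignon'            => true,        // never disable on Intranet/Extranet sites
    'BindUsers_to_Their_Sites' => true,
];
```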
The storage of passwords is accomplished using Argon hashes, and the hash complexity can be configured and tuned to the server resources (from light to intensively grueling). Password validations are made hash against hash, using a leak-free methodology (one that doesn't leave traces that could help break passwords if the server is pwned).
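The principle can be sketched with PHP's built-in password API, which supports Argon2 with tunable cost parameters (the article doesn't specify CORE's exact calls, so treat this as a stand-in, not CORE's implementation):

```php
<?php
// Tunable Argon2 hashing sketch. Cost parameters can be dialed from "light" to
// "grueling" per server. Falls back to the default algorithm on PHP builds
// without Argon2 support.
$algo    = defined('PASSWORD_ARGON2ID') ? PASSWORD_ARGON2ID : PASSWORD_DEFAULT;
$options = [
    'memory_cost' => 1 << 14,  // KiB (16 MiB) - raise this on beefier servers
    'time_cost'   => 3,
    'threads'     => 1,
];
$hash = password_hash('correct horse battery staple', $algo, $options);

// Validation is hash-against-hash internally, in constant time; the plaintext
// password never needs to be stored or logged anywhere.
$ok  = password_verify('correct horse battery staple', $hash);
$bad = password_verify('wrong guess', $hash);
```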
The framework also supports our concept of StrongStrings, another of our new and very useful class objects (found under \CORE\StrongString()), which allows handling sensitive information in highly restricted memory spaces. It also provides exploit-free string comparison and length computation methods for the paranoid programmer. StrongStrings are used to capture passwords and pass them through our object architecture in a safe manner. StrongStrings cannot be examined directly; they will not dump, nor output their content to debuggers. It should be noted that our StrongString objects are purely server objects; we do not currently have the capability of serializing a StrongString between different environments (it's on our @todo list though).
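The real \CORE\StrongString() API isn't documented in this article, but the core ideas (constant-time comparison, no content leakage to debug output) can be approximated in plain PHP. This hypothetical sketch is not CORE's class; PHP userland also cannot truly lock memory pages, so the "restricted memory" aspect is out of scope here.

```php
<?php
// Hypothetical approximation of the StrongString idea (illustrative only).
final class StrongStringSketch
{
    private string $secret;

    public function __construct(string $secret)
    {
        $this->secret = $secret;
    }

    // Constant-time comparison: leaks no timing information about where the
    // two strings first differ.
    public function equals(string $other): bool
    {
        return hash_equals($this->secret, $other);
    }

    // Length in bytes, without exposing the content itself.
    public function length(): int
    {
        return strlen($this->secret);
    }

    // Refuse to reveal the content to var_dump() and string casts.
    public function __debugInfo(): array
    {
        return ['secret' => '*** redacted ***'];
    }

    public function __toString(): string
    {
        return '*** redacted ***';
    }
}

$s = new StrongStringSketch('hunter2');
```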
This covers session handling, user authentication and password handling.
Following our vision of keeping things very simple, our framework comes out of the box with 3 security levels. This configuration has proven sufficient in all our web application projects; combined with the modular permissions, it seems we've struck a perfect balance.
If you're a PHP programmer integrating your own work into the framework, then typically you'll need Admin privileges. Web designers will normally have User privileges, with the Admin granularly granting edit permissions on the content. Managers are more typically middle-men, having supervisory rights over users, orders, customers and such. Managers can be granted access to modify the environment by an Admin, and they can delegate access to individual Users through the modular permission settings.
In the framework there is the notion of Modules, which has been with us since day 1, back in our grandfather versions. Modules are structured hierarchically under the Intranet and Extranet web roots (Internet sites are the exception; they don't sport modules). Each module, being a folder in itself, is defined by the existence of a file aptly called module.cfg, written in an ini-style format (yes, an MS-ism, but it was nice back then, and the format just stuck around in our framework). Within this module.cfg file we can, and should, define a minimal set of settings used to uniquely identify said module, give it some humane display names in different languages, and optionally define a minimum access level, along with a mega-list of additional settings that doesn't concern us in this article.
Typical module.cfg file content (the /Backend/Application Logs module in this case):
id=SysLogs
entrypoint=syslog.php
icon=fa-exclamation
name.en=Application Logs
name.fr=Logs Applicatif
name.es=Logs de la Applicacion
access_level=admin
The last setting in our example, access_level, determines the minimum access level required to access the current module. This feature is well ingrained in our framework and works like a charm. For module writers, the module.cfg file is the place to put your framework baseline settings for portability between apps: a simple copy-and-paste of your module hierarchy lets you deploy across different web applications in a matter of seconds.
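Since module.cfg is ini-style, PHP's stock ini parser can read it directly. CORE's actual module loader is internal; this sketch only shows the format being parsed and a hypothetical minimum-level gate built on it (the level ranking is invented for illustration).

```php
<?php
// Parse an ini-style module.cfg (content mirrors the Application Logs example).
$cfg = parse_ini_string(<<<'INI'
id=SysLogs
entrypoint=syslog.php
icon=fa-exclamation
name.en=Application Logs
access_level=admin
INI);

// Hypothetical minimum-access-level check: rank the three security levels and
// compare the user's rank against the module's required rank.
$ranks     = ['user' => 1, 'manager' => 2, 'admin' => 3];
$userLevel = 'manager';
$allowed   = $ranks[$userLevel] >= $ranks[$cfg['access_level']];
// A Manager is refused here, because the module requires admin.
```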
In addition to the module.cfg settings, the framework stages a cached copy of these settings in the database on first load. Once the settings have been stored in the database, they are reloaded for rendering from the database first. If additional settings are found in the module.cfg file that are not defined in the database, they are merged with the current database settings and immediately re-saved to the database. Database-stored settings can then be modified through the GUI by users with the required privileges.
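The merge rule described above (database values win, file-only keys get added and written back) boils down to a keyed union. The function name and data shapes here are hypothetical; CORE's real staging goes through QueryDB.

```php
<?php
// Sketch of the described settings merge (illustrative only).
function merge_module_settings(array $fromDb, array $fromFile): array
{
    // The + union operator keeps the left operand's value on key collisions,
    // so GUI-edited database settings win, and file-only keys are merged in.
    return $fromDb + $fromFile;
}

$db   = ['id' => 'SysLogs', 'access_level' => 'manager'];  // edited via the GUI
$file = ['id' => 'SysLogs', 'access_level' => 'admin', 'icon' => 'fa-exclamation'];

$settings = merge_module_settings($db, $file);
// access_level stays 'manager' (database wins); icon is newly merged in and
// would then be re-saved to the database.
```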
This is where things can get tricky, and why we strongly suggest keeping it simple at first: the framework is built to accept tiny additions, module by module, and even component by component.
Technically, you should always start off with a Master Intranet website. This Intranet website becomes your launch platform for administering additional websites later, but initially it's sufficient to start rolling out your own Intranet.
In theory, if you'll be rolling out Extranet websites, then for sure you require an Intranet website to administer your Extranet(s). The framework is built for this; be it for ghosting sites (i.e., branding, personalizing, customizing) where you sell and service Extranet systems for customers who are in the business of selling online to their own customers, for example.
And you can deploy and attach as many Internet sites as your heart desires as well. You might also want to allow your Extranet customers to deploy their own Internet sites within the framework.
Internet sites don't directly compete for resources in your infrastructure; each site can have its own dedicated database system. Where things are necessarily better done centrally is at the Extranet level. Extranet users share the same database and table space as the Intranet (by default; this can always be changed), because it just makes it easier to administer Intranet and Extranet users in one place. (For logical reasons as well, our framework's current brew just works better like this, because Intranet users can also be "assigned" to an Organisation and have access to their organisational Extranet users.)
To sum it all up, the Master Intranet Site is typically where you locate your corporate users and trusted collaborators, and any site attached afterwards becomes manageable by your Master Intranet users, depending on the permissions you grant, of course.
Sub-site users cannot see, or administer content in sites above them. The framework is built this way.
And technically, it's not possible to have two Master sites able to configure each other. Well... it is possible, but we never tried such monstrosities. It is possible, however, to install a higher-level Master site, say if your original environment mushroomed beyond your technical guidelines for some reason, or for logistic/legal reasons perhaps.
Our security overview wouldn't be complete without mention of how we handle user inputs. For one, most experienced Internet programmers probably know by now that programming on the web is more than 50% defensive programming. If your coding doesn't include 50 lines for handling a contact form's input, then you've certainly been owned by now.
Our framework is built very defensively, where it counts the most. We recognize that Internet sites are much more prone to hackers than Intranet or Extranet sites. That is because within our framework, we require a signon token to access any functionality beyond the login form. But on Internet sites, programmers are a bit more at the mercy of their own experience (or lack thereof), and many other external factors. Fame, to name but one.
Within the framework, we've built everything on the QueryDB object for database access. This object takes care of converting input data to a non-malignant form before it goes to the database. The catch is that we also convert the data back to its original form when displaying it, and we go to great lengths to avoid script injections in the user environment. Our technique is quite simple in fact (OWASP probably details it now too): we convert all quotes, double quotes, pipes and other characters that might matter to the database into a proprietary benign format. The QueryDB class object furthermore offers conversion functions that take care of the actual necessary quoting, so we normally quote everything that would otherwise be bare in SQL commands, and thus avoid another level of mishaps.
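To illustrate the round-trip, here is a toy version of the idea: dangerous characters are mapped to harmless placeholder tokens on the way into the database and restored on the way out. The token set and function names here are invented; CORE's actual QueryDB encoding is proprietary and more thorough.

```php
<?php
// Toy "benign format" round-trip (illustrative only; not CORE's real encoding).
const BENIGN_MAP = [
    "'" => '{sq}',    // single quote
    '"' => '{dq}',    // double quote
    '|' => '{pipe}',  // pipe
];

function to_benign(string $in): string
{
    // Applied before data reaches the database.
    return strtr($in, BENIGN_MAP);
}

function from_benign(string $stored): string
{
    // Applied when data is read back for display.
    return strtr($stored, array_flip(BENIGN_MAP));
}

$input  = "O'Reilly says \"hello\" | bye";
$stored = to_benign($input);    // no raw quotes or pipes reach the database
$shown  = from_benign($stored); // original text restored for display
```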
We find it more useful this way, without having to rely on pre-compiled queries and parameter loading. We could one day revise our QueryDB object to implement such a methodology, and it'd be a weekend job to revamp all our managed sites, nothing to it. But for now, we feel the current methodology is more in line with our internal object usage.
Another helper tool is offered in the Session class object, namely the get_RequestVar() method:
/**
 * Session_base::get_RequestVar()
 *
 * Retrieve and FILTER an incoming _REQUEST["variable"]. We'll keep a digest of what
 * we find, so save this digest in case of trouble with hackers. ;)
 *
 * @param mixed  $varname             variable name to look up in the _REQUEST[] array
 * @param string $expected_value_type [string|numeric|positive|mixed|array|json|allinclusive|fileupload|mime-attachment|hashed]
 * @param mixed  $strict_format       [A9A 9A9] [AAAA999] [AAA*999***AA] will chop the input to the format specified; trailing data is ignored
 * @param mixed  $maxlength           maximum length (in bytes) of the input; leave null to leave it un"trimmed"
 * @param mixed  $regexp_format       a valid PHP regexp for preg_match (eurk); not used if you specify a numeric/positive type or a strict_format
 * @return mixed $value or null
 */
public function get_RequestVar($varname, $expected_value_type = "string", $strict_format = null, $maxlength = null, $regexp_format = null)
Typically used to filter "simple" values like integer identification numbers, simple strings, postal codes and such.
If the submitted value doesn't fit the mold, the method will simply return null, thus avoiding a gazillion nightmares.
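For illustration, here is a free-standing validator with roughly the same contract (reject-to-null on a bad fit). It is not CORE's implementation: the real method supports many more types, strict formats and regexps, and may trim rather than reject on maxlength; this sketch simply rejects.

```php
<?php
// Hypothetical stand-in for get_RequestVar()-style filtering (illustrative only).
function filter_request_var(array $request, string $varname,
                            string $type = 'string', ?int $maxlength = null)
{
    if (!isset($request[$varname])) {
        return null; // missing variable: null, no notices, no surprises
    }
    $value = $request[$varname];
    if ($type === 'numeric' && !is_numeric($value)) {
        return null;
    }
    if ($type === 'positive' && (!is_numeric($value) || $value <= 0)) {
        return null;
    }
    if ($maxlength !== null && strlen((string) $value) > $maxlength) {
        return null; // doesn't fit the mold: reject outright
    }
    return $value;
}

// Simulated _REQUEST content:
$fake  = ['customer_id' => '123', 'notes' => str_repeat('x', 5000)];
$id    = filter_request_var($fake, 'customer_id', 'positive');   // passes
$notes = filter_request_var($fake, 'notes', 'string', 2048);     // rejected: too long
```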
Our personal recommendation for serious web programmers: limit, and always limit, the amount of data input coming from the web; never consider it safe, and if you absolutely must accept user input, filter each and every field very carefully. Contact forms are notorious for crumbling apart once spammers figure out your system.
We typically run our web applications on OpenBSD, using the built-in binaries or our own build of the OpenHTTPd web server. OpenBSD ships its httpd server chrooted by default, so that saves us the headache of a setup. We still have to chroot a couple of utilities and sockets on top, but there's a script for this in the installer, and a recipe in the Tlaloc system to accomplish it.
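For readers unfamiliar with OpenBSD's httpd, a minimal server block might look like the following. This is a hypothetical excerpt, not CORE's shipped configuration; hostnames, paths and the PHP-FPM socket location are invented for illustration.

```
# Hypothetical /etc/httpd.conf excerpt. OpenBSD's httpd(8) chroots to /var/www
# by default, so no extra chroot directive is needed for the server itself.
server "intranet.example.org" {
    listen on * tls port 443
    root "/htdocs/intranet"                 # relative to the /var/www chroot
    tls certificate "/etc/ssl/intranet.crt"
    tls key "/etc/ssl/private/intranet.key"
    fastcgi socket "/run/php-fpm.sock"      # socket must live inside the chroot
}
```

The point of the "couple of utilities and sockets" remark above is exactly the last line: anything httpd talks to (the PHP-FPM socket, resolver files, certificates the workers read) has to exist inside the chroot as well.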
Our recommendation is: if you're going to run this framework on-site on your network, then invest in deploying it on OpenBSD physical servers. Anything else would dilute our security brew, really (seriously!). It might be possible to set up the framework on Linux (or Windows, if you dare), but the complexities of maintaining storage spaces on Linux can be daunting. I normally recommend keeping it as simple as possible, and OpenBSD definitely fits the bill.
Most of my code-relatives and I work on OpenBSD. Do yourself a favor and check it out; you might like it, and donate to the OpenBSD community! (I accept hardware donations as well.)
So, in SaaS or PaaS format, or simply hosted through us, you'd get the framework running on OpenBSD, bare-metal. If we have the time, we might distribute the framework as OpenBSD virtual images, though the OpenBSD packaging system would be a better route.
Chrooting is a very old concept, by the way. By now Linux manages it relatively well, OpenBSD "perfectly", and Windows, well... it's Microsoft. They have it in different forms and names: UAC isolation, Virtual spaces, Hyper-V, Sandbox, Client Virtualisation... and so on. A big mess.
So while chrooting takes root in the wild, we can focus our efforts on deploying our own chrooted web servers, and establishing hierarchies of chrooted environments. Because we can, and probably should if we're going to setup more high-risk Internet sites.
In my certification book, chrooting Internet web servers is an obligation. For an Intranet site, perhaps a bit less of an obligation, but a very strong recommendation; personally, I'd fear the local users a lot more than the remote ones. And if you do it with OpenBSD, it's a total no-brainer.
We decided long ago that the framework would support application-level cryptography. The reasoning is that since we can only control and manage the application level, assuming we have no access to the hardware or its operating system (because of cloud or virtual environments, for example), it is best to do the cryptography inside the application so that data remains ciphered at all times when transmitted to, or stored on, the back-end or mid-tier servers. This has a number of side effects on the security posture.
The cryptographic services are provided by the BlackCipherBox class, which we develop in parallel to this project. The cryptographic engine uses the libsodium calls provided by PHP, and our class provides a wrapper around our own internal constructs. This is detailed in another article: Black Cipher Box.
Usage of cryptography within the framework is primarily tied to ciphered database fields and hashed fields. Through the user interface it is possible to configure ciphering for individual fields, and the engine will automatically handle the ciphering using a configurable key generation routine. Keys are kept in ciphered format in the main database (in the table SYS_Key) and limited to a number of uses determined by industry best practices for the specific algorithms. (As of May 2022 we're still tuning these parameters.)
Programmers can later decipher these database fields by scripting their data interfaces using the QueryDB object, and having the BlackCipherBox class hierarchy available in the autoloader. The QueryDB object provides facilities for Ciphering, Deciphering and Hashing strings.
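As a stand-in for what the Ciphering/Deciphering helpers do (their real signatures are not documented in this article), here is a field-level cipher round-trip using libsodium's secretbox. In CORE the key would be loaded from SYS_Key rather than generated inline, and the storage encoding is our own.

```php
<?php
// Field-level ciphering sketch with libsodium secretbox (illustrative only).
$key   = sodium_crypto_secretbox_keygen();  // in CORE: loaded (deciphered) from SYS_Key
$nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);

$plaintext = 'CardHolder=Jane Doe';

// Cipher on write: prepend the nonce so the field is self-contained, then
// base64 the result so it stores cleanly in a text column.
$cipherField = base64_encode($nonce . sodium_crypto_secretbox($plaintext, $nonce, $key));

// Decipher on read-back: split nonce and box, then open.
$raw       = base64_decode($cipherField);
$nonce2    = substr($raw, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
$boxed     = substr($raw, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
$recovered = sodium_crypto_secretbox_open($boxed, $nonce2, $key);
```

Note the per-field random nonce: reusing a nonce with the same key breaks secretbox's guarantees, which is one reason key usage counts (as mentioned above) are limited per algorithm.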
And as of May 2022, we're working on introducing BlackCipherBox to the file handling classes.
Keys are not stored in the file system; they are stored in the main database. Programmers can choose to export keys for safekeeping or initialization scripting, or port specific keys between applications through global configuration variables. (It's safe! They are encrypted.) The only issue as of this date is that portability is not entirely worked out: there remains a requirement to convert the ciphering on the keys to the target system's Master-Password (we depend on a symmetric cipher to wrap the keying material). Nevertheless, developers are free to implement their own pre-determination on key usage. Our development on the ciphering front continues for the foreseeable future; stay up to date with our articles.
A script is available in the application's /Scripts folder to initialize (or re-initialize) the cryptographic system. It supplies instructions on how and where to paste the generated key configurations (a master password for ciphering the keys, a global hash key unique to the web application, and a starting session key).
We also abuse cryptography in our very proprietary, cloud-friendly session handling routines. But there's nothing for integrators to do at this level; one just needs to configure the config files properly to make sure the necessary base keys are present.