Time to talk about the SMTP plumbing & those nasty vent traps!

DMARC stands for “Domain-based Message Authentication, Reporting & Conformance.” It is a protocol that uses SPF and DKIM to determine the authenticity of an email. SPF (Sender Policy Framework) is a DNS entry and protocol that provides a list of SMTP servers considered “permitted” to send email for a domain. There are two “From” addresses in an email message: the envelope From (Return-Path) and the header From. SPF verifies the former; however, a regular user never sees “Return-Path” and may falsely assume the domain in the header “From” is the sender domain.
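
If you want to see what a domain actually publishes, the SPF record is just a TXT entry in the domain’s DNS zone. Below is a minimal Python sketch for pulling it, assuming the third-party dnspython package is installed; the domain and the example record in the comment are illustrative:

```python
import dns.resolver  # third-party: pip install dnspython

# Look up the TXT records for the domain and pick out the SPF one.
for rdata in dns.resolver.resolve("example.com", "TXT"):
    txt = b"".join(rdata.strings).decode()
    if txt.startswith("v=spf1"):
        # e.g. "v=spf1 ip4:192.0.2.0/24 include:_spf.example.com -all"
        print(txt)
```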

SPF is quite different from DNS “MX” records, which are also DNS entries but work in the reverse sense: they tell sending email servers which IP addresses should be “contacted” to submit email for a particular domain. Think of it as “who is the postman” for a particular neighborhood, whereby you call that person up and say, “hey, I have a letter for somebody on your block, can you take it and give it to them?”

DKIM, on the other hand, is a way to “validate” that emails have not been tampered with after they leave the sending domain. DKIM is like a “stamp” or “watermark,” if you want to think of it that way. Besides validating that an email was not tampered with, DKIM validates the domain in the “From:” header, and one can easily conclude its significance when considering spoofing. A message may have multiple DKIM signatures, as every server it passes through may add its own, but only the one matching the domain in “From:” is worth considering.
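
To make that “alignment” idea concrete, here is a tiny Python sketch that checks whether the d= tag of a DKIM-Signature header matches the From: domain. The header values are made up for illustration, and real verification of the b= signature itself requires the cryptographic check:

```python
# Minimal sketch of DKIM "alignment": only a signature whose d= tag
# matches the From: domain counts (header values are illustrative).
from_header = "alice@example.com"
dkim_sig = "v=1; a=rsa-sha256; d=example.com; s=sel1; h=from:to:subject; bh=...; b=..."

from_domain = from_header.rsplit("@", 1)[-1].lower()
tags = dict(t.strip().split("=", 1) for t in dkim_sig.split(";") if "=" in t)

aligned = tags.get("d", "").lower() == from_domain
print("DKIM d= aligns with From:", aligned)
```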

We should also point out that DMARC is perhaps the one “main” thing that makes DKIM signatures truly useful. DKIM signatures are not mandatory, and a fake or invalid signature is equivalent to no signature at all.

DMARC is a policy and reporting system that acts when SPF and/or DKIM checks fail, helping receivers identify senders that might be spoofing or phishing in ways that enable email SPAM.

Your DMARC record is published alongside your domain’s DNS records (SPF, A record, CNAME, DKIM, etc.). Unlike SPF and DKIM, a properly configured DMARC policy can tell a receiving server whether or not to accept an email from a particular sender. It is important to note that not all receiving servers will perform a DMARC check before accepting a message, but many of the major ISPs and enterprises do, and adoption is growing rapidly.
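
For a concrete picture: the record lives as a TXT entry at the _dmarc subdomain of your zone. Here is a minimal sketch of how a receiver reads its tags; the record contents below are illustrative:

```python
# A DMARC record is published as a TXT entry at _dmarc.<yourdomain>.
# This sketch parses its tags; the record shown is illustrative.
record = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"

tags = dict(t.strip().split("=", 1) for t in record.split(";") if "=" in t)

print("Policy on failure:", tags.get("p", "none"))         # none / quarantine / reject
print("Aggregate reports to:", tags.get("rua", "(none)"))  # where reports get mailed
```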

Some of the benefits of DMARC

  1. Publishing a DMARC record protects your company brand by preventing spoofers (the “bad guys”) from sending mail from your domain. In some cases, simply publishing a DMARC record can result in a positive reputation bump.
  2. Ingesting DMARC reports increases visibility into your email traffic by letting you know who is sending mail from your domain.
  3. DMARC helps the global email community to establish a consistent policy for dealing with messages that fail to authenticate. This helps the email ecosystem as a whole become more secure and more trustworthy.

What can you do to deploy DMARC on CommuniGate Pro?

CommuniGate Pro has built-in features for DMARC, and we have also provided some tools to help you get up and running quickly. We also have the best support for the platform, by the people who code it! So, when in doubt or when you need some tips or tricks, you can reach out to the team.

First, you can start by setting up DKIM on your CommuniGate Pro platform by following the tips here:

DKIM with CommuniGate Pro

CommuniGate Pro versions 6.2.x have DMARC built in; see “Check SPF records” in <Setup tips SPF & DMARC>.

For older versions, we also provide a script for easy setup of the DMARC features if required:

CommuniGate Pro script repository

Additional Resources

Vladimir Anatolievich Butenko – email legend


On the 29th of August, at the age of 56, Vladimir A. Butenko died. The founder of CommuniGate Systems and the Regatta.link project, he is a legend among the early pioneers of the Silicon Valley “.com” era that ran from the early 1990s into the turn of the 21st century. This period is when many of the core foundations of the Internet were established and created by a few great engineers.

Vladimir was known globally for his insanely efficient code, which even the competition revered and spoke of in respectful terms. Today, over 200 million people use his technology each day to communicate and collaborate on the Internet.

Vladimir Anatolievich was a bright man who left an imprint on the history of software technology; his works set the bar for reliable and efficient multi-threaded code. Back in the mid-1980s, as a graduate student at the Faculty of Physics at Moscow State University, Vladimir developed the operating system MISS (Multipurpose Interactive time Sharing System). Subsequently, he moved from being the core developer and architect into organizing a group of like-minded colleagues who supplied the MISS OS, with a powerful set of application software, to the oil, gas and chemical industries.

In 1991, Vladimir Anatolievich founded the company Stalker Software in Germany. He moved the headquarters to San Francisco in the late 1990s, later renaming the enterprise to CommuniGate Systems in 2003, combining “CommuniGate” as the company and product brand. In the early days the products were developed on and sold for the Apple Macintosh platform. By 1997, when new ISPs were popping up weekly around the globe, Vladimir started to create the unified communications platform CommuniGate Pro. CommuniGate Pro soon became a leader in the email communications business with thousands of clients, each often hosting millions of users. The platform was available on Solaris, Linux, HP-UX, Tru64 and FreeBSD by the year 2000, soon followed by Microsoft Windows server versions.

Over more than 30 years of software development, Vladimir surrounded himself with many millions of people: the developers, business users and followers of the product. His work on protocol-level technologies was extraordinary and became known in technical communities for its “tight code” and extreme performance. The CommuniGate Pro platform not only had scale and reliability from day one, but Vladimir began adding telecom capabilities with SIP and XMPP technologies by 2004, creating the only known “fully unified” communications platform to this day.

After massive growth in the marketplace on the uptake of email and VoIP technologies, competition inevitably grew. Large systems went from millions of users to tens of millions, and for banks, 50+ million users was often “normal”. In order to set the record straight on performance and scale, we decided to slap down some published specifications to give us tailwinds for at least a year.

We set up several public displays of the CommuniGate Pro Dynamic Cluster with HP demonstrating 20 million users, then later 50 million in a single image on an IBM zSeries mainframe! CommuniGate Pro went on to take the world record for performance with the SPECmail submission in 2005. No other platform ever challenged those results, and CommuniGate Pro remains to this day the world champion in terms of the SPEC.org rating system.

In addition to working on software development, Vladimir Anatolievich was a professional athlete and a prominent figure in the world of yachting. Vladimir began sailing as a child in the “Aurora” yacht club, and he received the title of Master of Sports of the USSR.

Vladimir Butenko also acted as a patron, supporting the sailing section in Strogino, which provided comprehensive support to the Russian national sailing team during the 470 class World Championship in San Francisco (he also sponsored the Who is Who regatta).

From 2014, Vladimir worked with excitement and passion on his new business project: the creation of an automated system (Regatta.link) for the yacht racing competition marketplace, built on innovative technologies. The heart of the system is GPS: high-accuracy trackers and a system for transmitting data from the racing yachts to the judges. The system was deployed during the Open Russian Regatta in both 2017 and 2018, and Regatta.link has already been fully deployed at several international competitions, including world championships.

We remember the legend Vladimir created; we never forget, as millions of people use his artwork daily. Vladimir helped create far more than just great code. As the company grew over time, so did his team, and inevitably families were born as a welcome consequence.

CommuniGate Systems’ core values of reliability and efficiency are our founder’s vision of product excellence. His legend, CommuniGate Pro, with its unwavering penchant for reliability, will continue to run non-stop, as it should.

The farewell to Vladimir Butenko will be held on September 1, 2018.

How energy companies hold the keys to Electric Car adoption

Most often the answers to complex issues are right in front of you. The problem of pollution, especially in places like French cities, is caused largely by transportation: namely, diesel-powered vehicles that spew out particulates and scooters that have no filtration on their exhaust pipes. As much as we all recognize the source of the problem, the way to change this calamity is apparently elusive.

The well-known problem of urban parking is actually the pathway to widespread adoption of clean-power cars…

Who has the most to gain from electric car charging? The power company! Let’s think about how much revenue the petroleum companies make today on transportation. Just poke your head out the window and breathe deeply; smell that? The scent of diesel is the color of money for those guys.

Now, if you are the electric company, how do you increase your profits? Wait for the new versions of the iPhone to arrive and hope more people charge? Wait for the population to increase? 🙄

If a city really would like to have even, say, 10% of its population running around in EVs, how do they do that? Wait for the car companies to do better marketing? Pfff…

What if the electric company realized that just 10% of cars being electric powered means that revenue would be transferred from the petroleum companies to them? 🤔

It seems clear to me that if a company like EDF were to install charging/parking stations around major cities that are polluted (thanks to their competitors selling petroleum), they could link those stations via RFID badges to the EDF account nearly every citizen already has. Instant revenue growth.

When we listen to the problems of electric car adoption we typically hear about range; but that overlooks the fact that EVs are most needed (because of pollution) in urban areas, where 90% of residents do not travel more than 50 km a day. Major cities should seek out partnerships with electric power companies to expand their revenue potential through parking and charging. If city residents understood that they could park and charge inside the city with a power company badge, you would see electric car purchases soar.

European operators are poised to take control of the clouds over the EU

When discussing the systems used for email, the picture can often be quite different depending upon the generation of the person you are talking to. For many of us who have grown up in the last 30 years, email is a web-based system; there are no apps or software to install, as was the case for those of us who lived with Apple Mail or MS Outlook as the client.

Many of us grew up kind of expecting certain things, like email accounts, to be free, albeit normally provided by a search company trying to use us as advertising targets for their customers (outbound marketing clients). Most of us are probably familiar with the term “nothing comes free”, but the nuances of advertising were somehow “manageable” enough to avoid buying into a paid or professional system.

Whether you “see” email as a desktop program or a browser, what is common among all of us today is the issue of security and privacy. It seems a week cannot go by without some new revelation being disclosed about somebody’s email or private information being released.

Compounding the issue of security and privacy for European companies is the control of “where” their email resides and under what, or “whose”, laws access to that data is controlled. For a business, email is often data that contains valuable information about the company’s products or trade secrets. That means that for anyone using a free system, those overlooked “click-wrap legal agreements” typically blur the ownership and access rights, because the provider needs to use that data for advertising purposes. This can be a show-stopper for an organization that needs to protect not just its products and services information rights, but those rights of its employees too. In Europe, staff working in a company or public agency have rights under European regulation, and the organization is compelled to oblige.

The cloud services topology and emergent WebRTC-based unified communications have come quickly across the spectrum, bringing rays of fresh sunshine onto an otherwise impenetrable playing field. There is a massive opportunity for European hosting providers to offer Unified Communications in a private or National Cloud and compete head-on with advertising-based services that cannot offer localized support very well and have legal barriers to adoption for anyone concerned with European legal protection. Hosting companies that once saw large cloud service providers as a tidal wave dumping free services across their subscriber base now find themselves with a unique opportunity to add high-value services within the legal frameworks of their respective markets.

CommuniGate Systems has provided unparalleled stability and reliability in its hosting platform for over 25 years, enabling more than 250 network operators to deliver quality inside their marketplace. Join our community today and talk to one of our regional representatives about how we can help you build a branded Unified Communications solution that is compliant.

Internet Communications is set to change society again and this time the cars will love it!

How times have changed since they taught us how to communicate in school. When I first arrived in Silicon Valley in the dot-com era (late ’90s), many of the meetings were mixed-method “mash-ups”, to use a fun buzzword from those days! You see, not many of the banks wanted those of us in our 20s to run things and have too much freedom with the millions of fresh cash in the bank! So the idea was to bring in “seasoned” experts at the top management levels.

Those bankers (read: VCs) on the newly created “boards of directors” scattered around the SF Bay Area were “keen” to drop in MBAs and CEOs from all sorts of industries to “manage” us youngsters. Soon, by 1999, startups were popping up faster than a New York minute. Heck, we had startups forming inside startups! The weekly IPO lists were more eagerly watched than anything from the NFL.

The trouble was that anyone with a good amount of history and carrying brass-clad “business degrees” never used computers much, and certainly not email, to discuss and collaborate. So when they came to a meeting and saw all of us hovering over laptops with IRC and email “dings” clacking away all through the meeting, the “seasoned” experts would put their pencils down on their lovely leather-bound note pads and tell us to close the flaps on the laps or get out of the meeting.

Fast forward 20 years, and whether you lived back there in the Internet foundation period or grew up since, all of us are bound in business and personal lifestyles by Internet-protocol-based communications. Much of the “plumbing” went through stressful evolutions to deal with our addiction to media and consumption of interconnected social networking. I mean, the amount of photos, video and streaming content for pleasure is astronomically mind-boggling and will continue to climb at astounding rates as IoT devices come to play in our lives.

Recently, around one of the “watercooler” brain-crunch sessions with colleagues, the view ahead had to be de-fogged a bit to reveal what is in store for transportation systems and autonomous capabilities to not just drive, but share and communicate among the vehicles themselves. This is technically in a “ready” state now, and only adoption and legacy attrition confound public awareness of how massively the changes ahead will impact our lust for “individualism” and ownership of objects like cars or social profiles.

A lot of the media coverage of autonomous transportation paints the picture that “sensors” on cars are the “great feature set” of the future vehicle. But it seems to me three major areas of the evolution are “skimmed” over, maybe purposefully, as each is independently revolutionary. First, if you stand back and look at a car, you can see that it is designed for human “senses” and communications. I mean, there are mirrors, flashing lights, sounds, and all sorts of gauges inside to tell the human “conductor” what the velocity is as you wind around the roads and “steer to avoid” other humans.

Perhaps the smartest thing on board is the GPS navigation system; even this has been “dumbed down” for human interactions. The system usually has a screen to tell the driver what the route is, voice commands are two-way on modern units, and some even have the ability to overlay other sources of input besides the satellites. Hmm, that is the very interesting aspect of the “brains” in the car: being connected to external sources of information that can be fed into something other than the globe above our neck.

Imagine now we have a collection of networks, whether groupings of cars or IoT sensors in the roads, roundabouts, garages and parking slots, each with an API that can be tapped or registered to for “guidance”. Such network integrations are far more useful than LiDAR or radar imaging that only “sees” in the immediate proximity of the vehicle. These local sensors can report what they “see” to other cars “out of range” or around the corner, so that “cars ahead” can be told of things like animals in the road, or debris fallen down a cliff, which would not be “visible” until such a time that accidents could occur.

I am one who welcomes the freedom of the autonomous car more than I should, having grown up in a California society that values car ownership as nearly a right, if not a built-in tradition of maturity. I have come to realize, just like those bankers in the board room meetings, that some pencils need to be broken to allow progress to occur. When cars use social networks to drive and talk among their peers, or “spread the love” of sharing the view from the cockpit of a future car I “drive” (or is that “driven”?) to every sensor along the motorway, will I no longer resist the lack of privacy to take a Saturday night drive along the promenade?

Certainly in the next decade, “new and powerful capabilities” of communication among cars and the IoT sensors of our world will become organized, and that has profound potential to change our individualism. We should expect that as “intelligence” evolves and becomes better at making decisions for us, there will be resistance to such change. Nevertheless, as time rolls on, we can expect that as more efficiencies present themselves, they will help alleviate many of the issues we have let go on far too long: gridlock, road rage, parking unavailability, speeding fines, mistakes/accidents and, as most vehicles will be electric, maybe pollution.

Another aspirational benefit will be the ability to compose more awesome photos and video clips in hands-free driving mode.

Locking down access to webmail on your national or private cloud service

For many, if not most, of our partners in the network operator and application hosting space, the challenge of identifying the subscriber has become nothing short of a nightmare in recent times. Password-based security by itself is no longer manageable or secure, and it aggravates the user experience of the service by adding layers of frustration to the authentication sequence. Let’s look at some techniques that will mitigate some of these challenges and provide added assurance that the service security policies are effective, but also thoughtful of the impacts on the user.

CommuniGate Pro has a certificate authority built into the platform, and for the purposes of our chat today we will focus on using TLS certificates in combination with multi-factor authentication to lock down access while providing a reasonably smooth user experience. We should point out that this type of model is typically in demand for business subscribers, especially in governmentally regulated industries; banks, air transportation, government agencies and healthcare are especially pushed to conform to certificate-based security topologies. One of the challenges of the topology is the management and setup of the devices, which is normally mitigated when the system is operated by IT departments and “bring your own” devices are not permitted unless they are put under such management.

So what the heck am I talking about, in simple terms? Certificate-based TLS sessions are where the client system (laptop, desktop or mobile) has a signed digital certificate installed that is presented to the service during the SSL connection. That means the computer trying to “login” must also have this certificate to present to the service, as a means of determining that the computer itself is authorized, not just the user credentials.
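
As a rough illustration of the client side of that handshake, here is a Python sketch using the standard library’s ssl module; the host name and certificate file paths are placeholders for your own deployment:

```python
import socket
import ssl

HOST = "mail.example.com"  # placeholder for your service's frontend
PORT = 443

# Build a TLS context that will present our client certificate when the
# server requests one during the handshake.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.load_cert_chain(certfile="client-cert.pem", keyfile="client-key.pem")

with socket.create_connection((HOST, PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # If the server requires client certificates, the handshake above
        # only succeeds when our certificate is signed by a CA the server
        # trusts: the machine itself is authorized, not just the user.
        print("Connected with", tls.version(), tls.cipher())
```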

[Illustration: a typical network operator deployment with Pronto!-based web access for subscribers]

The illustration above shows a typical deployment at a network operator with a Pronto!-based web access method for the subscribers. In the private cloud, all users must conform to 3 added policies to “get into” the service:

  1. Password and user challenge / response is met with a biometric scan using our multi-factor API and mobile client
  2. TLS certificates must be present on the client machines and presented to the server
  3. The network that the machine is coming from must be on the list in the server policy

Furthermore, the network operator has also placed several “good practice” policies on the public access network for the reception and transmission of messaging content. In many cases we are seeing that inter-agency traffic, for example from the police to the justice department, is required to also present TLS certificates. Adding TLS certificates on the SMTP sessions is highly recommended to “tamp down” the flames of SPAM and to create policies that control what you want coming in, versus the model of “cleaning up” the junk after it arrives with an open SMTP policy.

It should be stated, as there is sometimes confusion about “email encryption”, whether that means encrypting the mail itself or the transportation of the mail. For the purposes of this discussion we are talking about TLS/SSL, and that is all about the “transport”, not the encryption of the email itself. Email encryption is performed in the CommuniGate Pro platform with a cousin technology called S/MIME that we will discuss in another blog posting.

In the hosting model we have two “doors” that we are talking about locking, or having keys for the user to open. First is the “web access”, or Pronto! webmail, which combines all its communications over an HTTP/S session. That means we can send/receive mail, make VoIP calls, and perform actions on the calendar or directory with a single socket connection; in this case HTTP/S, using the TLS certificate to control who, and from what system, can open the door. The other doorway we are talking about controlling is public, or external to the “domain” in question. That means if, in my example, the police and the justice department are on separate services, each can install the TLS certificates in their respective SMTP/S configuration profiles and lock the doors against any abusive or fraudulent attempted access.
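
On the SMTP/S door, the same idea applies: the sending server presents a client certificate when it upgrades the session with STARTTLS. A minimal Python sketch of the sending side, with hypothetical host and address names for the police/justice example, could look like this:

```python
import smtplib
import ssl

# Hypothetical peers for the police -> justice example; the certificate
# files are placeholders for the agency's own credentials.
context = ssl.create_default_context()
context.load_cert_chain(certfile="police-client.pem", keyfile="police-client.key")

message = (
    "From: officer@police.example\r\n"
    "To: clerk@justice.example\r\n"
    "Subject: TLS-authenticated inter-agency mail\r\n"
    "\r\n"
    "This session presented a client certificate during STARTTLS.\r\n"
)

with smtplib.SMTP("mx.justice.example", 25, timeout=30) as smtp:
    smtp.starttls(context=context)  # our client certificate is presented here
    smtp.sendmail("officer@police.example", ["clerk@justice.example"], message)
```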

While most of our talk is centered around the installation of TLS certificates on the access computers, we should not lightly skip over the way the user should authenticate. At the end of the day, if a computer is stolen, or accessed by an imposter, all they need is fingers on the keyboard. Oftentimes security breaches are performed by persons with malicious intent within the organization or on its periphery, like a partner, or even staff with access for cleaning and maintenance of the facilities.

Password-based authentication is only designed to determine that a password is “correct”, not that the person is actually who they say they are. Biometric authentication in a multi-factor policy is the best method today for adding a layer that is far more precise, but also simple to use compared to lengthy passwords that are difficult to enter and remember. CommuniGate Pro has built a simple-to-use and re-brandable mobile client for TouchID on iOS devices and for Android systems with biometric sensor features. We can also “fall back” to secure session data code transmissions or, least secure, SMS code validation, but we strongly suggest that biometric scan policies are enforced for the most reliable and traceable security policy on your cloud service.

REFERENCES:

CommuniGate Pro PKI infrastructure

Tips for protecting the SMTP session on CommuniGate Pro 


The ever increasing opportunity for National Cloud value added services

Seems to me, where and “with whom” you “float your balloon” is just as important as what “type” of balloon you have to fly. Translating that into the terminology of Cloud Computing: what legal rights you have, based on whose stuff (service) you are using, might be more important than the type of technology you have for security. That means that if I use great passwords or encryption, it might be less important than the fact that all my “stuff” is, at the end of the day, controlled by legal agreements I submitted to knowingly or unwittingly.

For the purpose of this post we want to set technology aside; let’s chat about the benefits or ramifications of security, i.e. encryption or access controls, another time.

The underlying aspect of security that oftentimes gets overlooked completely in discussions about cloud computing is the legal umbrella you might be walking under when using a cloud-based system or service. Most of us click, and few of us read, those EULAs that come with all the popular email, chat or voice/video systems in use today. Oftentimes these “agreements” include, in one way or another, the accord that “by using this service you explicitly agree that the jurisdiction controlling this usage License is xxxx”. Furthermore, many of these click-wrap agreements (for free and paid cloud services) indicate that your rights are forfeited and you should stop using the service if you do not agree to the jurisdiction.

Many telecoms and network operators have a massive opportunity in that they, more than many others, can provide a National Cloud, which has a lot of benefits not just for the public; we also find that many governmental organizations demand a local provider. I was recently speaking with a “post office” that uses one of our partner network operators for email. Kind of ironic, huh? I mean, mailmen using email. But OK, jokes aside, they too must have a way to communicate electronically via inter-agency messaging systems, and they want those to be “housed” inside the country domain, both physically and legally.

I have found that at the root or core of the value for many of our partners is the legal ability (as licensed operators) to issue phone numbers. Over-the-top services have in many cases overwhelmed operators globally. But “nationally”, the potential is just as it is today with phone numbers, if you think about @Internet address space that can be nationalized. Many technical limitations can also be overcome when a domain is controlled, regulated and managed on a national level.

Take the example of a provider issuing Internet address space on a National Cloud for email. Not only can the legal use License be placed under local laws and regulations (a benefit for business owners), but security and abuse can be managed far better than on unmanaged public messaging services. Simple case: a user or domain is fake or sending abusive mail; it can be decommissioned. Adding to this, the National Cloud operator can add value by certifying the origin of the mail, the contents of the mail (not having been tampered with) and much more, making email professional and far more trustworthy.

With over 200 Network Operators as partners, we have unique visibility into the values of Unified Communications as a Service and understand what not to do, what works and what does not. If you are a service provider interested in providing high-value business communications in your region, we have a unique way of working: as a partner relationship, not vendor/client. We listen, and we adapt to your local requirements better than most.

Time to “geek out” a little on storage!

Let’s take a few minutes to chat about storage systems and how they are super important for a good CommuniGate Pro hosting platform deployment. One big mistake with storage we find in just about every deployment is a misunderstanding of how the usage pattern and loads of CommuniGate Pro differ from those of, say, a database or file server in an Enterprise deployment. With that in mind, how about we set the record straight on how we like our discs and arrays to taste and digest?

CommuniGate Pro is highly adaptable in scale, up or down. The same binary can be deployed in situations where there is nothing more than a single server with internal drives, up to a Dynamic Cluster that has multiple storage systems: some internal to the physical machines, some attached as shared storage, and many arrays “attached” as logical systems that are presented to individual domains or realms. To set a target benchmark or “good example”, we will use a multiple-system architecture and a typical topology that is commonplace in our partner networks.

Let us begin with a case study of a regional Network Operator that has 1 million subscribers and sells broadband services both to consumers in residential deployments and to business subscribers on dedicated broadband links. For the residential subscribers, the operator provides email as a bundled service. For the business subscribers, the operator provides a richer, business-grade unified communications suite with value-added services like VoIP, premium groupware-style email and storage. As you can imagine, the “load profiles” of the subscribers will be radically different, even where all subscribers are using Webmail access (no IMAP/POP or SIP); the two example “group types” of services will place different requirements on your SaaS platform and, as a result, on the storage subsystem.

Let’s round it all down to some simple numbers for the purposes of this chat:

Our example network operator: “super duper broadband”

Total subscribers: 1 million

  • 900k consumer subs in a single dedicated domain “superduper.com”
  • 100k business subs spread across 1,000 domains ( like “bluedonuts.com” & “dieselcroissant.fr”)

Profile type: medium load (90% of the subscribers are consumers that do not log in all day)

Quota type: 100 MB for consumers and 1 GB for business subscribers (<30% utilization)

Concurrent Web sessions (HTTPS + XIMSS): 70,000

SMTP inbound per hour: 800k

SMTP outbound per hour: 200k

Estimated IOPS on Backends: 7,000

Estimated Storage capacity total: 40TB
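
As a quick sanity check, you can derive some back-of-envelope rates from the numbers above; this is just arithmetic over the example profile, not an official sizing formula:

```python
# Back-of-envelope checks derived from the example profile above.
concurrent_sessions = 70_000
backend_iops        = 7_000
smtp_in_per_hour    = 800_000
smtp_out_per_hour   = 200_000

# Roughly 0.1 IOPS per live web session hits the backend storage...
print(f"{backend_iops / concurrent_sessions:.2f} IOPS per concurrent session")

# ...and the SMTP flow averages out to a sustained message rate.
msgs_per_sec = (smtp_in_per_hour + smtp_out_per_hour) / 3600
print(f"{msgs_per_sec:.0f} messages/second through SMTP on average")
```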

Capacity planning begins with understanding your customers and their usage patterns. For example, consumer subscribers can vary radically in usage: some people buy an ADSL line, get 5 email accounts bundled and never use them, yet over time the accounts become filled with junk mail and notices, eating up storage for no good reason. On the other hand, older people tend to use the email accounts provided by the operator and stick every photo and document they have into their folder tree. So when an account is “active”, we might find they want, and “expect”, the speed and usability of an enterprise messaging solution.

Business subscriber profiles will obviously have a completely different usage pattern, linked to “work hours” and the penchant for attachments, as the messaging system becomes used almost like a fileserver. The email “storage and archival system” for business communications is fundamentally important to business in general, and operators can find this a “value add” offering. Finding a “weighted” profile is key; only your network monitoring will provide useful parameters from your actual subscribers. There are many “optimization” techniques and policies that we provide our partners to deploy and manage more effectively the storage and load characteristics of the CommuniGate Pro SaaS delivery platform.

Perhaps one of the most commonly deployed CommuniGate Pro Dynamic Cluster “layouts” is the 4 x 3 architecture, where you have 4 physical or “bare metal” servers on the front side using internal storage with SSD drives. These servers typically use storage only for logs and do not need a large amount of capacity if you have set up log rotation in a CRON job or have systems management to deal with housekeeping. Keep in mind that the load patterns of the Frontend servers are far more CPU-intensive today, given that WebRTC sessions and HTTPS to the edge now use SSL or other encryption techniques. On top of this are the loads that anti-abuse filters, policy management rules and traffic shaping engines place on the frontend server array.

The example layout has 3 Backend servers, which are the only systems in the Dynamic Cluster that have connections to the “shared storage” where the accounts reside. This is usually mounted as /var/CommuniGate/ on each machine in the Backend server configuration. That means the “Frontend servers” do not have access to the shared storage; when they want to access an account, they make the “request” to the Backend array and, once authenticated, the Backends decide which server will open the account directory and give that info back to the session the frontend system is controlling (through webmail, for example).

As you can imagine, the load on the Backend servers will be far more on the IOPS side of the equation, compared to the Frontend servers (CPU-heavy) that are dealing with sessions doing authentication and/or encryption, which equates to CPU cycles. Therefore, we can also propose that, when deciding on your network switches and interfaces, the Backend servers should always be placed on dedicated switches and use 10G ports or Fibre Channel.

The diagram above shows a properly configured Dynamic Cluster network with 4 dedicated IP ranges, layered ideally on dedicated switches. The following is a sample networking topology, but it should not replace talking with our engineers as part of any production Dynamic Cluster.

  • Public Network – This is the externally facing network, on a routable IP block, normally with one or more IPs assigned to the Loadbalancer on the “external interface”. All Frontend Servers in the Dynamic Cluster will have one network interface with an IP on this network and a DNS entry.
  • Inner Cluster Communications Network – This private network, using non-routable IP address blocks (such as 192.168.x.x), is used by the cluster members to communicate with each other. No other traffic should be put on this private network. Frontend servers should have a second interface configured with an IP here.
  • Management Network – This private network might be the shared LAN of the Operator or ISP NOC (Network Operations Center). This could be another non-routable network (such as 172.16.x.x). Each server should have another network interface configured so administrators can have access to the Administration UI or APIs for provisioning, billing, and other management duties.
    • Note: There may be times when a fifth network is used for management of the server at the OS/BIOS level. Many Sun and HP servers have a “lights out” port that can be connected to secure VPNs or terminal servers used to gain access to the machine in cases where there are connectivity issues or the server hardware or power has failed.
  • Storage Network – This private network, with a non-routable IP block (such as 10.10.x.x), is used only by the Backend servers to communicate with the shared storage system. This network should be high speed: Fibre Channel or 10GE.

We will not dig much deeper into the networking, other than to say we want the storage LAN to be dedicated, without other traffic. We also strongly recommend that the storage network be 10G and use SSDs whenever possible. NAS has advantages both economically and in reduced complexity, as we do not use file locks in the Dynamic Cluster and our performance is orders of magnitude better than most NFS applications that need that logic on the filesystem.

Back to our case study reference point: we have finally arrived at the point of talking about what new toys the geeks get to rack up. When thinking about the 900k subscribers, it might be totally reasonable to use a “spindle-based” NAS solution, while for our business subscribers we put all those domains and their storage on an SSD-based rack. CommuniGate Pro has the ability to move domains or accounts to logical arrays that are mounted as specified in a preferences file in the directory of the domain.

For spindle-based NAS systems, we find a few things that will often “trip up” the purchasing or specifications. RAID level 5 should always be avoided; when possible we like RAID 1+0 (a.k.a. RAID10), and yes, this will double the “spindle count”. Nevertheless, the striping over a rack of drives gives us the IOPS we want, more economically. Oftentimes a storage vendor will view CommuniGate Pro like an enterprise load and not have a good picture of how the Dynamic Cluster operates. In fact, they might “honestly” be looking to save some coin by using RAID levels that boost capacity.

Another important area in the specification of a large NAS system with many spindles is to choose the discs that are fastest, not largest. If the target is a given terabyte capacity, having 4 LUNs of drives, where each is, say, a cabinet of 24 drives, is far better than using discs of double the capacity and only having 2 cabinets: 96 spindles versus 48.

Let’s say, then, we have a NAS head on 4 cabinets of drives, or 96 usable spindles. Using an LVM, we span the LUNs into a single volume, present that to CommuniGate Pro as /var/CommuniGate, and stripe over all the drives in RAID10 to achieve the best performance in IOPS, exceeding what would be possible in other configurations.
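
To see why we are happy to pay the RAID10 “mirror tax” for IOPS, here is a back-of-envelope sketch using the standard write-penalty factors (2 physical I/Os per random write for RAID10, ~4 for RAID5). The per-drive numbers are illustrative assumptions, not measurements:

```python
# Back-of-envelope RAID math for the 96-spindle example above.
spindles      = 96
disk_iops     = 180    # assumption: ~180 IOPS per 10k RPM SAS drive
disk_tb       = 1.2    # assumption: capacity per drive in TB
read_fraction = 0.5    # assumed read/write mix

def raid10(n):
    capacity = n * disk_tb / 2                 # mirrored pairs halve capacity
    # Each random write costs 2 physical I/Os (one per mirror).
    iops = n * disk_iops / (read_fraction + 2 * (1 - read_fraction))
    return capacity, iops

def raid5(n):
    capacity = (n - 1) * disk_tb               # one disk's worth of parity
    # Each random write costs ~4 physical I/Os (read+write of data and parity).
    iops = n * disk_iops / (read_fraction + 4 * (1 - read_fraction))
    return capacity, iops

for name, fn in (("RAID10", raid10), ("RAID5", raid5)):
    cap, iops = fn(spindles)
    print(f"{name}: ~{cap:.0f} TB usable, ~{iops:.0f} random IOPS")
```

With these assumed numbers, RAID5 yields roughly twice the usable capacity but only about 60% of the random-write IOPS, which is exactly the wrong trade-off for a mail store.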

Our business subscribers will get a link to the optimal storage array when their domain is created. Meaning, when the domain is created, its profile is sent over to an SSD-based storage system that can have other value-added services like backups, archival rotation and encryption policies.

A properly configured Dynamic Cluster has the potential for 100% uptime; we have dozens of examples where the operator partner tells us that CommuniGate Pro is the longest-running and most stable app in the data center. In some cases, the only “major upgrades” to the system are a result of the hardware vendor’s EOL or phasing out support for the servers. Several of our partners have had Dynamic Clusters up for 8+ years non-stop. That being said, another good tip for your architecture is to plan your change management and how to deal with load sharing during spikes and peaks.

The CommuniGate Pro Dynamic Cluster has a “rolling updates” mechanism that allows software or hardware to be serviced or swapped with no downtime. In a Dynamic Cluster, the “server systems” or “cluster members” can be updated one by one when the administrator puts a cluster member into “non-ready” mode. A cluster member in this “non-ready” state stops taking new connections and can be upgraded, or its hardware can be changed entirely. In addition, you can easily add new cluster members to deal with load, or switch operating systems and hardware vendors. It is possible to run mixed systems, such as FreeBSD Frontends and Linux, Solaris or AIX Backends.

One thing to remember is that when you take a cluster member offline for service, you remove the load capacity that member provides to the entire Dynamic Cluster subscriber base. In our 4 x 3 example, the Backends are each carrying 33% of the load, and when you place one Backend member into “non-ready” mode, the remaining systems are now in a 50/50 load situation. These Backend server members should be designed to deal with that load in maintenance or hardware failure situations. It is far better to have more cluster members than to have big servers that each carry a large percentage of the load. My rule of thumb is to never have more than 33% load on any cluster member, and that is “peak” load, not nominal operating parameters.
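
That rule of thumb is easy to check with a couple of lines; this is just the arithmetic behind the 4 x 3 example, not an official sizing formula:

```python
def load_after_failure(members: int, per_member_peak: float) -> float:
    """Per-member load once one member is taken out of service."""
    total = members * per_member_peak
    return total / (members - 1)

# 3 Backends at 33% peak each -> 2 survivors at ~50% each
print(f"{load_after_failure(3, 0.33):.0%}")
# Add a 4th Backend and the survivors sit at ~44% instead
print(f"{load_after_failure(4, 0.33):.0%}")
```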

REFERENCES:

IOZONE tools for checking IOPS 

Linux FIO tool description

Performance on Linux in virtualized environments

CommuniGate Pro benchmark with IBM

CommuniGate Pro SpecMail results

SpecMail benchmark topology 

The EV might be the French answer to clean air compliance

One really “welcome” aspect of climate change policy here in France is the question of how we will meet our climate commitments when most of our greenhouse gas emissions come from our cars and scooters, not from power production, as would be the case in most countries. That means that if we are going to do our part, we will have to cut the use of petroleum-based transport massively! To put even a “small” dent into the pollution, whether you count that pollution as CO2 or other toxic emissions, our cars will need to be electrified and/or shared in some new “type” of system. This, in and of itself, is one of the most encouraging things for me, as it definitely means the EV will have to be taken seriously.

Unlike most countries in the EU, and I would even dare say the world, almost all of the cities in France are plagued with air-toxicity levels that are directly linked to tailpipes, not smokestacks. Exacerbating the problem are noise pollution, road rage linked to over-saturation of vehicles, and of course our favorite daily activity: the battle for parking that simply does not exist. All of these things cry out for a better understanding of how EV technology is well suited to people who live inside a city or its perimeter, a population that would not have to fret over EV range or charging time constraints.

Moving to an electric-based transportation system for inner cities is simply common practical sense; yet consumer adoption is maddeningly slow. We have indeed installed many charging stations for EVs in most of the cities around France, but alas, many of these sit empty and/or under-utilized.

One of the things you learn by living in France is how much we are an “electric” society. Everything in your daily life, from the cooktop, oven and heating to even the BBQ, is electric! Being from California, it took me a while to adjust to an electric BBQ by the poolside, and even longer to explain to my family and friends back stateside that it is commonplace. So, if we are so entrenched in electricity, and the prices are reasonable, and the generation is “clean power”, what are we waiting for?

If you happen to live in a city, you know that most of the city is cluttered with parked cars doing nothing. You also know that finding a parking place is a skill one acquires by living in that city for a while and learning its secrets. Adding to the equation is the “cost” of parking, which is quantifiable either as excruciatingly painful frustration in life, or as monetary disbursement through fees and, of course, fines. I am of the belief, having owned an EV for nearly 4 years, that parking is one of the “golden keys” to attracting EV ownership. It was the sole reason I purchased my first EV, and it remains a big motivational factor in the ownership of my car today, as I can charge cheaply or for free in many shopping centers and public places.

Advocacy does not only have to come through generous “subsidization” of ownership per se. I do believe that during the nascent phase of the Electric Vehicle ecosystem, some rebates are a matter of due course. However, I think we could do far more by simply adjusting how we prioritize parking for EVs, along with some stronger regulation of dirty cars entering densely populated urban areas, as has become common practice in Paris, Munich and NYC.

Many of our EV charging stations around Nice, in the south of France, are packed on any given Saturday; but before you get excited that EV use has taken a foothold, have a closer look at the photo. Those are not EVs in the slots. I typically find, 9 times out of 10, that the charging stations are filled or blocked with diesel cars, because the parking spaces are too irresistible for drivers searching for a space to pass up. The EV stations are clearly marked in bright green with big clearance space to maneuver, they are typically clean (no oil drops) and usually close to building entrances, just like dedicated handicap parking spaces. Yet when I have asked drivers why they park in the EV spaces and not the handicap spaces, they often tell me it is because in the handicap spots you can get a ticket or have your car towed!

So, while enforcement sounds great, it is a resource that costs money and is difficult to monitor. I am of the mind that if we had more EVs on the road and in the stations, they would in some ways be “self-policed” and would become more valued as they were utilized. That does not mean I think we should ignore abuse, not at all. I actually think fines are the wrong way to go; it would be better to hit the true “pain point” of drivers: the penalty points on your driver’s license.

Fact: far less than 0.05% of the cars on the road in the inner cities are EVs, and we probably need somewhere around 40% to change the toxicity levels in any way meaningful enough to meet our goals. Another frustrating phenomenon is that EVs are under attack even from ecologically minded people themselves! Many eco-techies confront me when I park to charge and say my EV drives more nuclear power usage! For some on the side of “green energy”, it is far better to have diesel and move our nuclear power to, say, gas, or worse, coal (like in Germany); brilliant idea!

Even with the decommissioning of nuclear power plants and the addition of energy production from solar, thermal and wind generation systems, such swaps have zero effect on greenhouse gas emissions from transport. Thus, we will need to slash our petroleum-based transport significantly to meet any objectives we have committed to.

It will be interesting to watch how we deal with our pollution problems here in France, and whether the Electric Vehicle will in fact be the golden goose or the dead duck. We have signed up for a hefty amount of change in terms of cutting our emissions, and if we hold to that agreement, it obviously means that diesel is going to be on some presentation deck, in some meeting room, being shown to some political leadership as the main root that has to be pulled from the dirt it is based upon.

REFERENCES:

EV sales and use statistics: https://en.wikipedia.org/wiki/Electric_car_use_by_country


The Grand Prix goes Electric


The geeks in us had to check out the 2nd “electric Grand Prix” this year in Monaco to see how technology has transformed the way cars race. Indeed, the organization has transformed the way we think about EVs: from a “cheap toy” type of car that is not strong or powerful enough to be in a “real race” into not just a contender, but perhaps the future of racing!

For most of us geeks who detest traffic jams and screaming road-rage drivers, we eagerly welcome AI taking over the driver’s seat. What was really interesting to learn at the ePrix 2017 event was the potential of IoT and neural network technology that is already being leveraged to eke out seconds of time in a race. Many of us geeks know that the LiDAR- and camera-based “sensors” in EVs are indeed better than our “biological sensors”, in the sense that they can detect “stuff” at longer or wider ranges than our eyes, ears or even nose. But the “mesh” of data, and processing it all to make decisions on “driving behaviors”, is evident on the horizon. That is really exciting when you consider that cars will soon communicate among themselves and make decisions on their own, based on inputs from data they collect from other cars, systems like Waze, IoT sensors in the road itself, or satellites (GPS), enabling EVs to cruise through traffic 100x more efficiently with no road rage. Well, at least until AI gets feelings!

The technology in the ePrix cars is nothing short of fascinating when you have a look in the cockpit and dream about the forces pushing you back in the seat, or pulling you through a curve. The HUD info connected back to the network is really tricked-out cool, and you can check out some YouTube videos like the following to get an idea: https://www.youtube.com/watch?v=T4_nitVagGU

One of the things you “become” when owning an EV today is some kind of “spokesperson”, as people stop you at charging points to ask about the electric car. EV adoption is still in its nascent stages; I mean, the EV industry is still a transition from toxic-air-producing vehicles to systems that are probably a bit better, especially inside cities. But adding to the entire “eco-friendly” aspect of the EV is technology that is just flat out “cool”, especially for kids! And who is more of a “kid” than an adult geek playing with cars?

We spent some time at the e-village and learned how the cars work, and the kids in all of us got to try out some of the BMW cars and examples of technology.

So the question that came to us a few times was: “are they fast”? We geeks can assure you the Tesla “ludicrous” button gives a lot of inspiration, but the throttle on the ePrix cars is one I would love to push!