Internet communications are set to change society again, and this time the cars will love it!

How times have changed since they taught us how to communicate in school. When I first arrived in Silicon Valley in the late ’90s, many of the meetings were mixed-method “mash ups,” to use a fun buzzword from those days! You see, not many of the banks wanted those of us in our 20s to run things and have too much freedom with the millions of fresh cash in the bank! So the idea was to bring in “seasoned” experts at the top management levels.

Those bankers (read VCs) on the newly created “boards of directors” scattered around the SF Bay Area were “keen” to drop in MBAs & CEOs from all sorts of industries to “manage” us youngsters. Soon, by 1999, the startups were popping up faster than a New York minute. Heck, we had startups forming inside startups! The weekly IPO lists were more eagerly watched than anything from the NFL.

The trouble was that anyone with a good amount of history and carrying brass-clad “business degrees” never used computers much, and certainly not email to discuss and collaborate. So when they came to a meeting and saw all of us hovering over laptops with IRC and email “dings” clacking away all through the meeting, the “seasoned” experts would put their pencils down on their lovely leather-bound note pads and tell us to close the flaps on the laptops or get out of the meeting.

Fast forward 20 years, and whether you lived through the Internet foundation period or grew up since, all of us are bound in our business and personal lifestyles to Internet-protocol-based communications. Much of the “plumbing” went through stressful evolutions to deal with our addiction to media and our consumption of interconnected social networking. I mean, the amount of photos, video and streaming content for pleasure is astronomically mind boggling and will continue to climb at astounding rates as IoT devices come to play with our lives.

Recently, around one of the “watercooler” brain-crunch sessions with colleagues, the view ahead had to be de-fogged a bit to reveal what is in store for transportation systems and autonomous capabilities to not just drive, but share and communicate “amongst” the vehicles themselves. This is technically in a “ready” state now, and only adoption and legacy attrition cloud the public awareness of how massively the changes ahead will impact our lust for “individualism” and ownership of objects like cars or social profiles.

A lot of the media coverage of autonomous transportation paints the picture that “sensors” on cars are the “great feature set” of the future vehicle. But it seems to me three major areas of the evolution are “skimmed” over, maybe purposefully, as each is independently revolutionary. First, if you stand back and look at a car, you can see that it is designed for human “senses” and communications. I mean, there are mirrors, flashing lights, sounds, and all sorts of gauges inside to tell the human “conductor” what the velocity is as you wind around the roads and “steer to avoid” other humans.

Perhaps the smartest thing on board is the GPS navigation system; even this has been “dumbed down” for human interactions. The system usually has a screen to tell the driver what the route is, voice commands are two-way on modern units, and some even have the ability to overlay other sources of input besides the GPS satellites. Hmm, that is the very interesting aspect of the “brains” in the car: being connected to external sources of information that can be fed into something other than the globe above our neck.

Imagine now we have a collection of networks; they can be groupings of cars, or IoT sensors in the roads, roundabouts, garages and parking slots, each with an API that can be tapped or registered to for “guidance.” Such network integrations are far, far more useful than LiDAR or radar imaging that only “sees” in the immediate proximity of the vehicle. However, these car-local sensors can report what they “see” to other cars “out of range” or around the corner, so that “cars ahead” can be told of things like animals in the road, or debris fallen from a cliff, which would not otherwise be “visible” until an accident could occur.

I am one that welcomes the freedom of the autonomous car more than I should, having grown up in a California society that values car ownership as nearly a right, if not a built-in tradition of maturity. I have come to realize, just like those bankers in the board room meetings, that some pencils need to be broken to allow progress to occur. When cars use social networks to drive and talk among their peers, or “spread the love” of sharing the view from the cockpit of a future car I “drive” (or is that “am driven”?) to every sensor along the motorway, will I no longer resist the lack of privacy to take a Saturday night drive along the promenade?

Certainly, in the next decade, “new and powerful capabilities” of communication among the cars & the IoT sensors of our world will become organized, and that has a profound, imminent potential to change our individualism. We should expect that as “intelligence” evolves and gets better at making decisions for us, there will be resistance to such change. Nevertheless, as time rolls on, we can expect that as more efficiencies present themselves, this will help alleviate many of the generational issues we have let go on far too long: gridlock, road rage, parking unavailability, speeding fines, mistakes/accidents and maybe pollution, as most vehicles will be electric.

The other good news for aspirational benefits will be the ability to compose more awesome photos and video clips in hands-free driving mode.

Locking down access to webmail on your national or private cloud service

For many, if not most, of our partners in the network operator & application hosting space, the challenge of identifying the subscriber has become nothing short of a nightmare in recent times. Password-based security by itself is no longer manageable or secure, and it aggravates the user experience of the service by adding layers of frustration to the authentication sequence. Let's look at some techniques that will mitigate some of these challenges and provide added assurance to the customer that the service security policies are effective, but also thoughtful of the impacts on the user.

We have in CommuniGate Pro a certificate authority built into the platform, and for the purposes of our chat today we will focus on using TLS certificates in combination with multi-factor authentication to lock down access while providing a reasonably smooth user experience. We should point out that this type of model is typically in demand for business subscribers, especially in governmentally regulated industries; banks, air transportation, government agencies and healthcare are especially pushed to conform to certificate-based security topologies. One of the challenges of the topology is the management and setup of the devices, which is normally mitigated when the system is operated by IT departments and “bring your own” devices are not permitted unless they are put under such management.

So what the heck am I talking about, in simple terms? Certificate-based TLS sessions are where the client system (laptop, desktop or mobile) has installed a signed digital certificate that is presented to the service during the SSL connection handshake. That means that the computer trying to “login” must also have this certificate to present to the service as a means of determining that the computer itself is authorized, not just the user credentials.
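To make that “door key” idea concrete, here is a minimal sketch of the two halves of a mutual-TLS handshake using Python's standard ssl module. The certificate file paths are illustrative placeholders (left commented out so the sketch runs standalone), not CommuniGate Pro settings.

```python
import ssl

# Server side of the "door": demand a certificate from every connecting client,
# and only trust certificates signed by our own certificate authority.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED          # no client cert, no session
# server_ctx.load_cert_chain("server.pem", "server.key")   # illustrative paths
# server_ctx.load_verify_locations(cafile="ca.pem")        # the issuing CA

# Client side of the "door": load the signed certificate that will be presented
# automatically during the TLS handshake, alongside the usual user credentials.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# client_ctx.load_cert_chain("client.pem", "client.key")   # illustrative paths

print(server_ctx.verify_mode == ssl.CERT_REQUIRED)  # True: the machine itself must prove it is authorized
```

With `verify_mode = ssl.CERT_REQUIRED` on the server side, a client that cannot present a certificate signed by the trusted CA fails the handshake before any password is ever typed.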


The illustration above shows a typical deployment at a network operator with a Pronto! based web access method for the subscribers. In the private cloud, all the users must conform to 3 added policies to “get into” the service:

  1. Password and user challenge / response is met with a biometric scan using our multi-factor API and mobile client
  2. TLS certificates must be present on the client machines and presented to the server
  3. The network that the machine is coming from must be on the list in the server policy

Furthermore, the network operator has also placed several “good practice” policies on the public access network for reception and transmission of messaging content. In many cases we are seeing that inter-agency traffic, for example from the police to the justice department, is required to also present TLS certificates. Adding TLS certificates on the SMTP sessions is highly recommended to “tamp down” the flames of SPAM, and to create policies that control what you want coming in, versus the model of “cleaning up” the junk after it arrives with an open SMTP policy.
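As a sketch of what the sending side of such an SMTP policy can look like, here is how a hypothetical agency mailer would carry its certificate onto the SMTP session using Python's smtplib. The function name, host and certificate paths are assumptions for illustration, not a CommuniGate Pro configuration.

```python
import smtplib
import ssl

# Build a TLS context carrying the agency's signed client certificate.
ctx = ssl.create_default_context()
# ctx.load_cert_chain("agency-client.pem", "agency-client.key")  # illustrative paths

def send_via_mutual_tls(host: str, sender: str, rcpt: str, body: str) -> None:
    """Open an SMTP session, upgrade it with STARTTLS (presenting our
    certificate in the handshake), and only then hand over the message."""
    with smtplib.SMTP(host, 587) as smtp:
        smtp.starttls(context=ctx)   # the client cert is offered here
        smtp.sendmail(sender, rcpt, body)
```

The receiving server can then refuse the session outright when no acceptable certificate is presented, which is the “control what comes in” model rather than cleaning up junk afterwards.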

It should be stated, as sometimes there is confusion about “email encryption,” whether that means encrypting the mail itself or the transportation of the mail. For the purposes of this discussion we are talking about TLS/SSL, and that is all about the “transport,” not the encryption of the email itself. Email encryption is performed in the CommuniGate Pro platform with a cousin technology called S/MIME that we will discuss in another blog posting.

In the hosting model we have two “doors” that we are talking about locking, or having keys for the user to open. First is the “web access” or Pronto! webmail that combines all its communications over an HTTP/S session. That means we can send/receive mail, make VoIP calls, and perform actions on the calendar or directory with a single socket connection; in this case HTTP/S, using the TLS certificate to control who, and from what system, can open the door. The other doorway we are talking about controlling is public, or external to the “domain” in question. That means if, in my example, the police and the justice department are on separate services, each can install the TLS certificates on their respective SMTP/S configuration profiles and lock the doors against any abusive or fraudulent attempted access.

While most of our talk is centered around the installation of TLS certificates on the access computers, we should not lightly skip over the way the user should authenticate. At the end of the day, if a computer is stolen, or accessed by an imposter, all they need is fingers on the keyboard. Oftentimes security breaches are performed by persons with malicious intent within the organization or on its periphery, like a partner or even staff with access for cleaning and maintenance of the facilities.

Password-based authentication is only designed to determine that a password is “correct,” not that the person is actually who they say they are. Biometric authentication in a multi-factor policy is the best method today for adding a layer that is far more precise, yet simple to use compared to lengthy passwords that are difficult to enter and remember. CommuniGate Pro has built a simple-to-use and re-brandable mobile client for TouchID on iOS devices and for Android systems with biometric sensor features. We can also “fall back” to secure session data code transmissions or, least secure, SMS code validation, but we strongly suggest that biometric scan policies are enforced for the most reliable and traceable security policy on your cloud service.


CommuniGate Pro PKI infrastructure

Tips for protecting the SMTP session on CommuniGate Pro 


The ever increasing opportunity for National Cloud value added services

Seems to me… where & “with whom” you “float your balloon” is just as important as what “type” of balloon you have to fly. Translating that into the terminology of Cloud Computing: what legal rights you have, based on whose stuff (service) you are using, might be more important than the type of technology you have for security. That means my great passwords or encryption might matter less than the fact that all my “stuff” is, at the end of the day, controlled by legal agreements I submitted to knowingly or unwittingly.

For the purpose of this post we want to set technology aside for another discussion; meaning, let's chat about the benefits or ramifications of security, i.e. encryption or access controls, another time.

The underlying subject of security that oftentimes gets overlooked completely in discussions about cloud computing is the legal umbrella you might be walking under when using a cloud-based system or service. Most of us click, and few of us read, those EULAs that come with all the popular email, chat or voice/video systems in use today. Oftentimes these “agreements” include, in one way or another, the accord that “by using this service you explicitly agree that the jurisdiction controlling this usage License is xxxx.” Furthermore, many of these click-wrap agreements (for free and paid cloud services) indicate that your rights are forfeited and you should stop using that service if you do not agree to the jurisdiction.

Many telecoms and network operators have a massive opportunity, in that they, more than most, can provide a National Cloud that has a lot of benefits not just for the public; we also find many governmental organizations demand a local provider. I was recently speaking with a “post office” that uses one of our partner network operators for email. Kind of ironic, huh? I mean, the mail man using email; but OK, jokes aside, they too must have a way to communicate electronically through inter-agency messaging systems, and want those to be “housed” inside the country domain, both physically and legally.

I have found that at the root, or core, of the value for many of our partners is the legal ability (as a licensed operator) to issue phone numbers. Over-the-top services have in many cases overwhelmed operators globally. But “nationally” the potential is just as it is today with phone numbers, if you think about @Internet address space that can be nationalized. Many of the technical weaknesses and limits can also be overcome when a domain is controlled, regulated and managed on a national level.

Take the example of a provider issuing Internet address space on a National Cloud for email. Not only can the legal use License be placed under local laws and regulations (a benefit for business owners), but security and abuse can be managed far better than on un-managed public messaging services. Simple case: if a user or domain is fake or sending abusive mail, it can be decommissioned. Adding to this, the National Cloud operator can add value by certifying the origin of the mail, the contents of the mail (not having been tampered with) and much more, making email professional and far more trustworthy.

With over 200 Network Operators as partners, we have a unique visibility into the value of Unified Communications as a Service and understand what not to do, what works and what does not. If you are a service provider interested in providing high-value business communications in your region, we have a unique way to work: as a partner relationship, not vendor/client. We listen and we adapt to your local requirements better than most.

Time to “geek out” a little on storage!

Let's take a few minutes to chat a bit about storage systems and why they are super important for a good CommuniGate Pro hosting platform deployment. One big mistake with storage we find in just about every deployment is the misunderstanding of how the usage patterns and loads of CommuniGate Pro differ from those of, say, a database or file server in an Enterprise deployment. With that in mind, how about we set the record straight on how we like our discs and arrays to taste and digest?

CommuniGate Pro is highly adaptable in scale, up or down. The same binary can be deployed in situations where there is nothing more than a single server and internal drives, up to a Dynamic Cluster that has multiple storage systems; some internal to the physical machines, some attached as shared storage, or with many arrays “attached” as logical systems that are presented to individual domains or realms. To set a target benchmark or “good example” we will use a multiple-system architecture and a typical topology that is commonplace in our partner networks.

Let us begin with a case study of a regional Network Operator that has 1 million subscribers and sells broadband services both to consumers in residential deployments and to business subscribers on dedicated broadband links. For the residential subscribers the operator provides email as a bundled service. For the business subscribers the operator provides a richer, business-grade unified communications suite with value-added services like VoIP, premium groupware-style email and storage. As you can imagine, the “load profiles” of the subscribers will be radically different even where all subscribers are using webmail access (no IMAP/POP or SIP); the two example “group types” of services will place different requirements on your SaaS platform and, as a result, on the storage subsystem.

Let's round it all down to some simple numbers for the purposes of this chat:

Our example network operator: “super duper broadband”

Total subscribers: 1 million

  • 900k consumer subs in a single dedicated domain “”
  • 100k business subs spread across 1,000 domains ( like “” & “”)

Profile type: medium load (90% of the subscribers are consumers that do not login all day)

Quota type: 100 MB for consumers and 1 GB for business subscribers (<30% utilization)

*Concurrent Web sessions (https + XIMSS): 70,000

SMTP inbound per hour: 800k

SMTP outbound per hour: 200k

Estimated IOPS on Backends: 7,000

Estimated Storage capacity total: 40TB
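As a rough sanity check, the headline numbers tie together with some back-of-the-envelope arithmetic. The ~21% average utilization below is an assumption I picked so the result lands near the 40 TB estimate; the profile above only promises “<30%.”

```python
# Back-of-the-envelope capacity check for "super duper broadband".
CONSUMERS, BUSINESS = 900_000, 100_000
CONSUMER_QUOTA_GB, BUSINESS_QUOTA_GB = 0.1, 1.0   # 100 MB and 1 GB quotas

provisioned_tb = (CONSUMERS * CONSUMER_QUOTA_GB
                  + BUSINESS * BUSINESS_QUOTA_GB) / 1000
utilized_tb = provisioned_tb * 0.21               # assumed average utilization

print(f"quota provisioned: {provisioned_tb:.0f} TB")   # 190 TB
print(f"actually utilized: {utilized_tb:.0f} TB")      # ~40 TB
```

Note the gap between provisioned quota and utilized storage: thin provisioning against real utilization, not the raw quota totals, is what drives the 40 TB figure.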

Capacity planning begins with understanding your customer and their usage patterns. For example, the consumer subscribers can vary radically in usage patterns, because some people buy an ADSL line, get 5 email accounts bundled, and never use them. But over time those accounts become filled with junk mail and notices, eating up storage for no good reason. On the other hand, older people tend to use the email accounts provided by the operator and stick every photo and document they have into their folder tree. So when an account is “active” we might find they want and “expect” the speed and usability of an enterprise messaging solution.

Business subscriber profiles obviously have a completely different usage pattern, linked to “work hours” & the attachment penchant of users, as the messaging system becomes used almost like a fileserver. The email “storage and archival system” for business communications is fundamentally important to business in general, and operators can find this a “value add” offering. Finding a “weighted” profile is key; only monitoring your network will yield parameters that actually reflect your subscribers. There are many “optimization” techniques and policies that we provide our partners to deploy and manage the storage and load characteristics of the CommuniGate Pro SaaS delivery platform more effectively.

Perhaps one of the most commonly deployed CommuniGate Pro Dynamic Cluster “layouts” is the 4 x 3 architecture, where you have 4 physical or “bare metal” servers on the front side that use internal storage with SSD drives. These servers typically use storage only for logs and do not need a large amount of capacity if you have set up log rotation in a CRON job or have systems management to deal with housekeeping. Keep in mind that the load patterns of the Frontend servers are far more CPU-intensive today, in light of the fact that WebRTC sessions and HTTPS to the edge now use SSL or other encryption techniques. On top of this are the loads that anti-abuse filters, policy management rules and traffic shaping engines place on the Frontend server array.

The example layout has 3 Backend servers, which are the only systems in the Dynamic Cluster that have connections to the “shared storage” where the accounts reside. This is usually mounted as /var/CommuniGate/ on each machine in the Backend server configuration. That means the Frontend servers do not have access to the shared storage; when they want to access an account, they make the “request” to the Backend array and, once authenticated, the Backends decide which server will open the account directory and give that info back to the session the Frontend system is controlling (through webmail, for example).

As you can imagine, the load on the Backend servers will be far more on the IOPS side of the equation, compared to the Frontend servers (CPU heavy) that are dealing with sessions doing authentication and/or encryption, which equates to CPU cycles. Therefore, we can also propose that when deciding on your network switches and interfaces, the Backend servers should always be placed on dedicated switches and use 10G ports or Fibre Channel.

The above diagram shows a properly configured Dynamic Cluster network with 4 dedicated IP ranges, ideally layered on dedicated switches. The following is a sample networking topology, but it should not replace talking with our engineers as part of any production Dynamic Cluster.

  • Public Network – This is the externally facing network, on a routable IP block, normally with one or more IPs assigned to the load balancer on the “external interface.” All Frontend servers in the Dynamic Cluster will have one network interface with an IP on this network and a DNS entry.
  • Inner Cluster Communications Network – This private network, using non-routable IP address blocks (such as 192.168.x.x), is used for the cluster members to communicate with each other. No other traffic should be put on this private network. Frontend servers should have a second interface configured with an IP here.
  • Management Network – This private network might be the shared LAN of the Operator or ISP NOC (Network Operations Center). This could be another non-routable network (such as 172.16.x.x). Each server should have another network interface configured so administrators can have access to the Administration UI or APIs for provisioning, billing, and other management duties.
    • Note: There may be times when a fifth network is used for management of the server at the OS/BIOS level. Many Sun and HP servers have a “lights out” port that can be connected to secure VPNs or terminal servers to gain access to the machine in cases where there are connectivity issues or the server hardware or power has failed.
  • Storage Network – This private network, with a non-routable IP block (such as 10.10.x.x), is used only by the Backend servers to communicate with the shared storage system. This network should be high speed: Fibre Channel or 10GE.

We will not dig too much deeper into the networking, other than to say we want the storage LAN to be dedicated, without other traffic. We also strongly recommend that the storage network be 10G and use SSDs whenever possible. NAS has advantages both economically and in reduced complexity; since we do not use file locks in the Dynamic Cluster, our performance is orders of magnitude better than most NFS applications that need that logic on the filesystem.

Back to our case study reference point, and we have finally arrived at the point of talking about what new toys the geeks get to rack up. When thinking about the 900k consumer subscribers, it might be totally reasonable to use a “spindle based” NAS solution, while for our business subscribers we put all those domains and their storage on an SSD-based rack. CommuniGate Pro has the ability to move domains or accounts to logical arrays that are mounted as specified in a preferences file in the directory of the domain.

For spindle-based NAS systems, we find a few things that will often “trip up” the purchasing or specification. RAID level 5 should always be avoided; when possible we like RAID 1+0 (a.k.a. RAID10), and yes, this will double the “spindle count.” Nevertheless, the striping over a rack of drives gives us the IOPS we want, more economically. Oftentimes a storage vendor will view CommuniGate Pro like an enterprise load and not have a good picture of how the Dynamic Cluster operates. In fact, they might be “honestly” looking to save some coin by using RAID levels that boost capacity.

Another important area in the specification of a large NAS system with many spindles is to choose discs that are the fastest, not the largest. If the target is some given terabyte capacity, having 4 LUNs of drives, where each is, say, a cabinet of 24 drives (96 spindles in total), is far better than using discs of double the capacity and having only 2 cabinets: 48 drives versus 96.

Let's say then we have a NAS head on 4 cabinets of drives, or 96 usable spindles. Using an LVM we span the LUNs into a single volume, present that to CommuniGate Pro as /var/CommuniGate, and stripe over all the drives in RAID10 to achieve the best performance in IOPS, exceeding what would be possible in other configurations.
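To sketch why this layout comfortably clears the 7,000-IOPS target from our case study, here is the usual rule-of-thumb arithmetic. The per-spindle figure and the read/write mix are assumptions for illustration; always verify with FIO or iozone on the actual hardware.

```python
# Rule-of-thumb IOPS estimate for a 96-spindle RAID10 volume.
SPINDLES = 96
IOPS_PER_SPINDLE = 180      # assumed, e.g. a 15k RPM SAS drive
READ_FRACTION = 0.70        # assumed read/write mix for the workload
WRITE_PENALTY = 2           # RAID10 writes land on both halves of a mirror

raw_iops = SPINDLES * IOPS_PER_SPINDLE
# Each write consumes two back-end operations, so discount the mix accordingly.
effective_iops = raw_iops / (READ_FRACTION + (1 - READ_FRACTION) * WRITE_PENALTY)

print(f"raw: {raw_iops}, effective mixed workload: {effective_iops:.0f} IOPS")
```

Even with the RAID10 write penalty factored in, the effective number stays well above the 7,000-IOPS estimate for the Backends; this is the headroom that a capacity-optimized RAID 5 layout would give away.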

Our business subscribers will be linked to the optimal storage array when their domain is created. Meaning, when the domain is created, its profile is sent over to an SSD-based storage system that can carry other value-added services like backups, archival rotation and encryption policies.

A properly configured Dynamic Cluster has the potential for 100% uptime; we have dozens of examples where the operator partner tells us that CommuniGate Pro is the longest-running and most stable app in the data center. In some cases the only “major upgrades” to the system are a result of the hardware vendor's EOL or phasing out support for the servers. Several of our partners have had Dynamic Clusters up for 8+ years non-stop. That being said, another good tip for your architecture is to plan your change management and how to deal with load sharing during spikes and peaks.

The CommuniGate Pro Dynamic Cluster has a “rolling updates” mechanism that allows software or hardware to be serviced or swapped with no downtime. In our Dynamic Cluster, the “server systems” or “cluster members” can be updated one by one when the administrator puts a cluster member into “non-ready” mode. When put into this “non-ready” state, the cluster member stops taking new connections and can be upgraded, or the hardware can be changed entirely. In addition, you can easily add new cluster members to deal with loads, or switch operating systems and hardware vendors. It is possible to run mixed systems, such as FreeBSD Frontends and Linux, Solaris or AIX Backends.

One thing to remember: when we take a cluster member offline for service, we are removing the load capacity that member provides to the entire Dynamic Cluster subscriber base. In our 4 x 3 example, the Backends each carry 33% of the load, and when you place one Backend member into “non-ready” mode, the remaining systems are now in a 50/50 load situation. These Backend members should be designed to handle that load in maintenance or hardware-failure situations. It is far better to have more cluster members than to have big servers that each carry a large percentage of the load. My rule of thumb is to never have more than 33% load on any cluster member, and that is “peak” load, not nominal operating parameters.
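The maintenance arithmetic above generalizes into a tiny helper; a sketch only, and the function name is mine:

```python
# Per-member load share when some cluster members are in "non-ready" mode,
# assuming the load balancer spreads work evenly over the remaining members.
def load_share(total_members: int, offline: int) -> float:
    """Fraction of total cluster load each remaining member must carry."""
    remaining = total_members - offline
    if remaining <= 0:
        raise ValueError("cannot take every member offline")
    return 1.0 / remaining

# The 4 x 3 example: three Backends at ~33% each, 50/50 with one in service.
print(round(load_share(3, 0), 2))   # 0.33
print(round(load_share(3, 1), 2))   # 0.5
```

Sizing each Backend for the `load_share(3, 1)` case, not the everyday `load_share(3, 0)` case, is exactly the 33%-peak rule of thumb in practice.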


IOZONE tools for checking IOPS 

Linux FIO tool description

Performance on Linux in virtualized environments

CommuniGate Pro benchmark with IBM

CommuniGate Pro SpecMail results

SpecMail benchmark topology 

The EV might be the French answer to clean air compliance

One really “welcome” aspect of climate change policy here in France is “how we will meet the goals” of our climate commitments when most of our greenhouse gas emissions come from our cars & scooters, not from power production, as would be the case in most countries. That means if we are going to do our part, we will have to cut the use of petroleum-based transport massively! To put even a “small” dent into the pollution, whether you count that pollution as CO2 or other toxic emissions, our cars will need to be electrified and/or shared in some new “type” of system. This, in and of itself, is one of the most encouraging things for me, as it definitely means the EV will have to be taken seriously.

Unlike most of the countries in the EU, and I would even dare say the world, almost all of the cities in France are plagued with air-toxicity levels that are directly linked to tailpipes, not smokestacks. Exacerbating the problem are noise pollution, road rage linked to over-saturation of vehicles, and of course our favorite daily activity, the battle for parking that is simply nonexistent. All of these things cry out for a better understanding of how EV technology is well suited to people who live inside a city or its perimeter; a population that would not have to fret over EV range or charging time constraints.

Moving to an electric-based transportation system for inner cities is simply common, practical sense; yet adoption by consumers is maddeningly slow. We have indeed installed many EV charging stations in most of the cities around France; but alas, many of these sit empty and/or under-utilized.

One of the things you learn by living in France is how much we are an “electric” society. Everything in your daily life, from the cooktop, oven and heating to even the BBQ, is electric! Being from California, it took me a while to adjust to an electric BBQ at the poolside, and even longer to explain to my family and friends back stateside that it is commonplace. So, if we are so entrenched in electricity, and the prices are reasonable and the generation is “clean power,” what are we waiting for?

If you happen to live in a city, you know that most of the city is cluttered with parked cars doing nothing. You also know that finding a parking place is a skill one acquires by living in that city for a while and learning the secrets. Adding to the equation is the “cost” of parking, which is quantifiable either as excruciatingly painful frustration in life, or as monetary disbursement through fees and of course fines. I am of the belief, having owned an EV for nearly 4 years, that parking is one of the “golden keys” to attracting EV ownership. It was the sole reason I purchased my initial EV, and it remains a big motivational factor in the ownership of my car today, as I can charge cheaply or for free in many shopping centers and public places.

Advocacy does not only have to come through generous “subsidization” of ownership per se. I do believe that during the nascent phase of the Electric Vehicle ecosystem, some rebates are a matter of due course. However, I think we could do far more by simply adjusting how we prioritize parking for EVs, and by stronger regulation of dirty cars entering densely populated urban areas, as has become common practice in Paris, Munich and NYC.

Many of our EV charging stations around Nice in the south of France are packed on any given Saturday; but before you get excited that EV use has taken a foothold, have a closer look at the photo. Those are not EVs in the slots… I typically find, 9 times out of 10, that the charging stations are filled or blocked with diesel cars, because the parking spaces are too irresistible for drivers searching for a space to pass up. The EV stations are clearly marked in bright green with big clearance space to maneuver, they are typically clean (no oil drops) and usually close to building entrances, just like dedicated handicap parking spaces. Yet, when I have asked drivers why they park in these EV spaces and not the handicap ones, they often tell me it is because in the handicap spots you can get a ticket or have your car towed!

So, while enforcement sounds great, it is a resource that costs money and is difficult to monitor. I am of the mind that if we had more EVs on the road and in the stations, they would in some ways be “self-policed” and would become more valued when utilized. That does not mean I think we should ignore abuse; not at all. I actually think fines are the wrong way to go, and it would be better to hit the true “pain point” of drivers: the penalty points on your driver's license.

Fact: far less than 0.05% of the cars on inner-city roads are EVs, and we probably need somewhere around 40% to change toxicity levels in any way meaningful enough to meet our goals. Another frustrating phenomenon is that EVs are even under attack from ecologically minded people themselves! Many eco-techies confront me when I park to charge and say my EV just drives more nuclear power usage! For some on the side of “green energy” it is apparently better to keep diesel and shift our nuclear generation to, say, gas or, worse, coal (as in Germany); brilliant idea!

Even with the decommissioning of nuclear power plants and the addition of energy production from solar, thermal, and wind generation systems, those swaps alone have roughly zero net effect on greenhouse gas emissions. Thus, we will need to slash our petroleum-based transport significantly to meet any objectives we have committed to.

It will be interesting to watch how we deal with our pollution problems here in France, and whether the Electric Vehicle turns out to be the golden goose or the dead duck. We have signed up for a hefty amount of change in terms of cutting our emissions, and if we hold to that agreement, it obviously means that diesel is going to end up on some presentation deck, in some meeting room, being shown to some political leadership as the main root that has to be pulled from the dirt.


EV sales and use statistics:





The Grand Prix goes Electric


The geeks in us had to check out this year’s 2nd “electric Grand Prix” in Monaco to see how technology has transformed the way cars race. Indeed, the organization has transformed the way we think about EVs: from a “cheap toy” kind of car, not strong or powerful enough for a “real race”, into not just a contender but perhaps the future of racing!

For those of us geeks who detest traffic jams and screaming road-rage drivers, AI taking over the driver’s seat is eagerly welcomed. What was really interesting to learn at the ePrix 2017 event was how IoT and neural-network technology is already being leveraged to squeeze out seconds in a race. Many of us know that the lidar- and camera-based “sensors” in EVs are better than our “biological sensors”, in the sense that they can detect “stuff” at longer or wider ranges than our eyes, ears or even nose. But the “mesh” of data, and the processing of it all to make decisions on “driving behaviors”, is what is on the horizon. That is really exciting when you consider that cars will soon communicate among themselves and make decisions on their own, based on inputs they collect from other cars, from systems like Waze, from IoT sensors in the road itself, or from GPS satellites, enabling EVs to cruise through traffic 100x more efficiently with no road rage. Well, at least until AI gets feelings!
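To make that idea of cars deciding from multiple inputs a bit more concrete, here is a minimal sketch of how a vehicle might fuse advisories from several sources into one driving decision. Everything here (the `Advisory` shape, the `recommended_speed` rule, the 0.5 confidence threshold) is an illustrative assumption, not any real V2V protocol or product:

```python
from dataclasses import dataclass

@dataclass
class Advisory:
    source: str          # hypothetical input, e.g. "lead_car", "road_sensor"
    max_speed_kmh: float # speed this source suggests as a safe maximum
    confidence: float    # 0.0-1.0, how much we trust this source

def recommended_speed(advisories, default_kmh=90.0):
    """Pick a cruising speed from several advisories.

    Conservative rule: if any advisories are well-trusted, obey the
    most restrictive of them; otherwise fall back to a
    confidence-weighted average of whatever inputs we have.
    """
    if not advisories:
        return default_kmh
    trusted = [a for a in advisories if a.confidence >= 0.5]
    if trusted:
        return min(a.max_speed_kmh for a in trusted)
    total = sum(a.confidence for a in advisories)
    return sum(a.max_speed_kmh * a.confidence for a in advisories) / total

advisories = [
    Advisory("lead_car", 70.0, 0.9),        # car ahead reports a slowdown
    Advisory("road_sensor", 80.0, 0.6),     # road sensor: wet surface
    Advisory("traffic_service", 110.0, 0.4) # low-confidence traffic feed
]
print(recommended_speed(advisories))  # → 70.0, the strictest trusted input
```

The point of the sketch is the shape of the problem: each car becomes a consumer of many partially trusted data streams, and the interesting engineering is in how conservatively the fusion rule resolves disagreement between them.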

The technology in the ePrix cars is nothing short of fascinating when you look into the cockpit and dream about the forces pushing you back in the seat, or pulling you through a curve. The HUD information, connected back to the network, is really tricked-out cool, and you can check out some YouTube videos like the following to get an idea:

One of the things you “become” in owning an EV today is a kind of “spokesperson”, as people stop you at charging points to ask about the electric car. EV adoption is still in its nascent stages; the industry is still in transition from toxic-air-producing vehicles to systems that are probably a bit better, especially inside cities. But adding to the entire “eco-friendly” aspect of the EV is technology that is just flat-out “cool”, especially for kids! And what is an adult geek playing with cars if not a big “kid”?

We spent some time at the e-village and learned how the cars work, and the kids in all of us got to try out some of the BMW cars and examples of technology.

So the question that came to us a few times was “are they fast?” We geeks can assure you the Tesla “Ludicrous” button gives a lot of inspiration, but the throttle on the ePrix cars is one I would love to push!


Learning how to make the future do what you want

The best way to predict the future is to participate in creating what comes next. Nevertheless, we can learn a lot from the past, and from how the “old guard” sometimes manages the new. If you were around in the dotcom days, we simply did not have a reference model for our technical tools. In fact, everything was totally new and unexpected in the business world, and change came fast, hard, and “all of a sudden” it was just “like that”.

Today Unified Communications is quickly coming to the Contact Center. Yes, for the statisticians in all of us, we still cling to the phone as the “reference model”. But we should not forget how quickly change comes. Have you talked to anyone recently who is, say, 13 to 19 years old? Ask them what they think about using a phone to talk to somebody, or what an email address is used for other than setting up some account or giving to the cashier at the store when pressed for it. Putting the Contact Center into the hands of the consumer on the most prolific device currently known not only makes good business sense, but is perhaps already yesterday’s news.

While it is true that the telephonic media channels of the Contact Center are still considered “king of the hill”, we can learn a lot about market behavior by looking at communications trends. When we were in Silicon Valley in the late 90’s, email and portable phones were our standard. Yet anyone will tell you that in those days the “40+ year old” executives did not have this in their blood, because they simply never grew up around technology as we did.

If you took funding off Sansome Street in San Francisco, the first thing they would do is send you a pack of suits (advisors) to join the management team. These guys (and some girls) would show up with pencils and big thick note pads (looking like a bound book), scribble all day, look at us with scorn, and in some cases even ban email usage in the meeting rooms! For the “suits”, the method of working entailed meetings, discussions of SAP, and “processes” that were mind-boggling. But our way of working, over email and sometimes from home or in the car, was scandalous.

FACT: today the amount of phone traffic has plummeted in contrast to the traffic over chat, or over images shared as a “message”. When the phone rings in my house and I look at my kids to pick it up, they say “it cannot be for me, as my friends would never use the phone”. It is as bizarre to me as our habits were to the “suits”, or to my parents, who could not keep us off the phone. So, if these kids will be taking over from the old guard (us) in short order, does it not follow that as they communicate with the support, billing or purchasing department, their methods and demands will require change?



Working with our partners in the network operator segment, we have developed Unified Communications for Contact Centers. Our unique focus is on mobile loyalty applications and WebRTC that work for today’s media choices of voice and email, yet also offer chat, video and biometrics that make authentication as easy as “a touch”, not “a snap”, of the finger! Partner with us today to build a Cloud-based Contact Center for your region and be a part of the future, not the history.

Great partners make life more fun

For over 17 years, our partner Mauritius Telecom has been delivering cutting-edge Unified Communications services. Together with the CommuniGate Pro multi-tenant platform, our partnership has evolved into a core B2B offering for subscribers. At CGS we have adapted to market changes and waves of innovation over the years, but one thing that has never changed is that we keep our partners close and use their insight in developing the product roadmap. This week we spent some time with our colleagues and partner, learning how their National Cloud will be a leading example of Unified Communications in the Cloud with regulatory compliance demands met.

Of course, at an average of 32°C in the day and 30°C at night, we had to dress light and carry small bags!

Sometimes geeks take things too far and bring their briefcase to the beach, thinking maybe there are great Wi-Fi signals around!

What does one do when in Mauritius Island with good friends and colleagues? Diving of course!

Searching for a Cloud near Nemo….

Alcatraz never looked so pretty; but definitely bring a boat with you either way!

At CommuniGate Systems we like to keep in touch with our partners and take a close look at how our platform benefits their business, always looking for ways to strengthen our relationships and develop even further.

Stay tuned to see who we will visit next!

2017 – the year the Cloud becomes a National asset

This last year (2016) we saw a global surge and race to move business applications to the Cloud, with new development models like Docker containers and all sorts of new languages and frameworks. We also began to see businesses realize, and take advantage of, the movement to WebRTC-capable systems, such as Contact Center products! However, in the last year we had no peace when it came to hacks, attacks, and large-scale “outages” that wreaked havoc and made us soberly realize that growing pains will trip us up here and there.

Security risks and governance of the Cloud model have been a controversial area for the FinTech and healthcare market segments, by the very nature of their pre-existing regulatory burden. However, “the move” to the Cloud has extended the “call for control”, regulation and, of course, protection of IPR and trade secrets to all of us; even the local pizza delivery shop does not want its website hacked and pizzas delivered to the wrong places by the hands of pranksters.

One of the fastest growing demands on technology platforms like CommuniGate Pro, which supports the Cloud and SaaS provider market segments, is a drive towards a “National Cloud” model. For many countries in the EU, Africa and the Middle East there is a “slam on the brakes” sense of urgency to control the move to the Cloud. This reaction is in part a justifiable response to questions of content control and property rights tied to the physical location of data, and of which laws govern everything from metadata to “just being in the wrong place at the wrong time”. I had a client who had their systems taken offline simply because their portion of a shared architecture happened to reside on servers that were under investigation for illicit behavior by a totally unrelated customer sharing those systems in the provider’s data center in the USA, while this particular company was based in Belgium. Once the systems were seized, there was no recourse for this “collateral victim”, which fell into a hole of darkness (website offline) simply because it was situated next to a “bad neighbor”.

Our view is that most countries in 2017 will begin to see that they must own, control, and oftentimes regulate private versions of a Cloud within their borders to mitigate risks. This will start with regulated industries, such as the previously mentioned banks or healthcare systems, but the move will quickly encompass any business that wants to be under the umbrella of national laws rather than the laws of the hosting provider’s country of origin. On the positive side, we see and believe these regulations are morphing from a burden into the value of having a National Cloud as an asset.


Building stuff is fun, but modifying to suit the need is smarter

There is clearly value in building applications that are “built to suit”. The purpose and manner of your operations is what makes you who you are as a company, and normally that drives your values and competitiveness. Also, understanding customer needs intrinsically, matched with the relationship customers have with you, overrides trying to bend your organization around the way some application or system works.

We recently visited the BMW Werk in Munich, and one of the things we were “pressing” our guide about was the technology in use in all those cool robotic systems. We learned that the logic and software were custom-made and tuned initially by the Swiss vendor many years ago, but BMW then formed internal competence to write their own algorithms based on their specific requirements. Contact Centers in large deployments, say at a mobile network operator, can have hundreds of centers doing all sorts of tasks: from provisioning, billing, and support to sales activities, logistics for maintenance crews, and even internal HR and travel divisions. These systems were grown into the company’s requirements. That has always been costly, yet necessary, in that the needs of the operation are not generic enough for out-of-the-box delight.

I remember my days in corporate IT, and indeed one of the downsides of turn-key applications like a CRM or ERP was that you had to mold the staff to that vendor’s methods of “workflow”. Obviously there was a need to get closer to building vs. “assimilating”, and the entire spectrum of software evolution has run for years around all sorts of buzzword-compliant themes like “mash ups” & “plug-ins” or “widgets” & “integrations”. Today we are still left deciding how “sticky” to get our hands with the next great API and preference panel du jour.

Today we have an amazing set of technologies available that no longer require a massive truckload of IT junk to get started. The advent of the Cloud model for delivery of applications by vendors removes a lot of the back-end hassles that made adoption or migration nightmarish. However, we believe that applications must “suit” the need, and that means the consumption side should always have some skin in the game: either molding the applications themselves, or participating in the roadmap decision tree with us as the creator of the software platform.

CommuniGate Pro was designed from day 1 to be a hosting platform at the core, with the right “mentality” for Cloud services, whether you are the platform hosting company or the end client. The technology is multi-tenant and has a full development SDK and set of APIs that let the best of the best geeks have fun, but also provide revenue-stream possibilities for customization services, making Cloud providers more than resellers, with intrinsic value delivered through professional services.

Everyone has their “secret sauce” and style as a product company. Since our inception we have been focused on an architecture that delivers the best performance, reliably. After 25 years of developing software we have not budged one inch on that philosophy, for better or worse. Many times we have had major internal debates about acquiescing a bit on that stance in order to develop some new capability or feature. Time and time again, the answer has been firm adherence to our values.

CommuniGate Pro is developed in C++ and is a single multi-threaded package, standing not only as the sole example of such a design in its marketplace peer group, but also holding uncontested performance-spec awards that support our design decisions. Unified Communications really is all about having a single backend that can extend protocols to support multimedia communications understood on the core platform, without handing off to separate systems or servers. Our design is also all about “Dynamic Clustering”, with “all-active” members breaking the mentality of failover or “passive” designs.
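To illustrate the idea of a single multi-threaded process serving several protocols against one shared backend, here is a minimal sketch. It is not CommuniGate Pro code (the product is C++); the queue, the worker loop, and the protocol names are all toy assumptions standing in for real protocol front ends feeding a common core:

```python
import queue
import threading

# One shared in-memory store stands in for the unified core; each
# worker thread plays a protocol front end (SMTP, XMPP, SIP, ...)
# delivering into the same backend state, in the same process.
store = {}
store_lock = threading.Lock()
inbox = queue.Queue()

def protocol_worker():
    while True:
        item = inbox.get()
        if item is None:          # shutdown sentinel
            inbox.task_done()
            break
        protocol, user, payload = item
        with store_lock:          # all protocols share one state
            store.setdefault(user, []).append((protocol, payload))
        inbox.task_done()

threads = [threading.Thread(target=protocol_worker) for _ in range(4)]
for t in threads:
    t.start()

# Three different "protocols" deliver to the same mailbox.
for proto in ("smtp", "xmpp", "sip"):
    inbox.put((proto, "alice", f"hello via {proto}"))
inbox.join()

for _ in threads:
    inbox.put(None)
for t in threads:
    t.join()

print(sorted(p for p, _ in store["alice"]))  # → ['sip', 'smtp', 'xmpp']
```

The design point the sketch makes is the one in the text: when every media channel lands in one backend inside one process, there is no hand-off between separate servers to fail or fall out of sync.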

This unique capability and design has brought me, over the years, the same message from hundreds of data center teams: “it is the only system on the core network that has never been down”. In the markets we serve, having a system with uptimes running over 8+ years is not only cool but sensible, for operations that are supposed to be 24×7 non-stop. Leveraging processor affinity, we have been able to eclipse all known products in similar spectrums on today’s multi-core server hardware, scaling up or down to the level of a Raspberry Pi.
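For readers curious what “leveraging processor affinity” means in practice, here is a small sketch of pinning a process to specific CPU cores so its threads stay on the same caches instead of bouncing between cores. The helper name is my own; the underlying call is the standard `os.sched_setaffinity`, which Python exposes on Linux only, so the sketch degrades gracefully elsewhere:

```python
import os

def pin_to_cores(cores):
    """Pin the current process to the given CPU cores (Linux-only API)."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, set(cores))   # 0 = the calling process
        return os.sched_getaffinity(0)        # report the resulting mask
    return None  # affinity control is not exposed on this platform

print(pin_to_cores({0}))
```

Keeping a hot worker on one core avoids cache misses from migration, which is one of the simple mechanical reasons affinity-aware servers squeeze more out of multi-core hardware.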

With the move to less infrastructure and more ease of use, the CommuniGate Pro platform is uniquely suited to Cloud hosting providers and to developers who want to bring IPR to market for Contact Center deployments, or to provide web and mobile Unified Communications to business processes without boatloads of bloatware. Join our ecosystem today as a certified solutions provider by coming to one of our training and certification courses in 2017.