Information: Distributed, not Distracted

Although we were once able to identify with certainty where the information we received came from, it is now far more difficult to determine its origin. We have gone from being a ‘data consumer’ society to an ‘information producer’ society. In the past it was enough to read the headlines of the press or listen to the radio to know who had said what, and we could internally apply a selection criterion to decide whether to accept or reject the information we received. Now that technology allows it, we are ‘bombarded’ daily by knowledge coming from multiple sources, but that does not mean we should discard it or simply ‘lock ourselves away’.

In spite of the negative side described in the previous paragraph, it has been extremely positive to be able to count on more data sources every day. This has provided constant feedback in our daily lives, has proven to be a support in our personal and professional development, and has helped us clarify our short-, mid- and long-term goals. It is only a matter of preparing ourselves to receive this information, deciding how to respond to it, and mentally setting a ‘contingency plan’ whenever one is required.

Considering these facts, and that Informatics has evolved in the same way human society has in this sense (constantly more data sources), it has become necessary to develop elements that let us receive and distribute information while staying abstracted from its generation: communication interfaces, or APIs (Application Programming Interfaces). This need is reflected in the many API services now on the market, with Amazon API Gateway standing out as a very attractive option for software development teams.

Programming an API often involves choosing a development platform ‘A’, defining a strategy ‘B’ for change control, using a service ‘C’ to monitor its usage, implementing a policy ‘D’ to enforce its security and, likely, creating a system ‘E’ that responds to alarms about the API’s usage. When at Morris & Opazo we found that Amazon API Gateway responded well to each of these ‘required letters’, we wanted to dive deeper into its use and the capabilities it could offer to ongoing projects.
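As a rough illustration of how those concerns map onto Amazon API Gateway, the following CloudFormation sketch declares a minimal REST API with a versioned stage, an API-key requirement, a usage plan with throttling, and a CloudWatch alarm on server errors. All resource names (`DemoApi`, `demo-api`, the `/status` mock endpoint, and the limit values) are illustrative assumptions, not part of any Morris & Opazo project.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Sketch only - a minimal API Gateway setup (all names are assumptions)

Resources:
  DemoApi:                      # the API itself
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: demo-api

  DemoResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      RestApiId: !Ref DemoApi
      ParentId: !GetAtt DemoApi.RootResourceId
      PathPart: status

  DemoMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      RestApiId: !Ref DemoApi
      ResourceId: !Ref DemoResource
      HttpMethod: GET
      AuthorizationType: NONE
      ApiKeyRequired: true      # 'D': callers must present an API key
      Integration:
        Type: MOCK              # placeholder integration for the sketch
        RequestTemplates:
          application/json: '{"statusCode": 200}'

  DemoDeployment:
    Type: AWS::ApiGateway::Deployment
    DependsOn: DemoMethod
    Properties:
      RestApiId: !Ref DemoApi

  DemoStage:                    # 'B': named stages support versioned rollouts
    Type: AWS::ApiGateway::Stage
    Properties:
      RestApiId: !Ref DemoApi
      DeploymentId: !Ref DemoDeployment
      StageName: v1

  DemoUsagePlan:                # 'C': usage control, throttling and quotas
    Type: AWS::ApiGateway::UsagePlan
    Properties:
      ApiStages:
        - ApiId: !Ref DemoApi
          Stage: !Ref DemoStage
      Throttle:
        RateLimit: 100          # steady-state requests per second
        BurstLimit: 20
      Quota:
        Limit: 10000            # requests allowed per period
        Period: MONTH

  DemoErrorAlarm:               # 'E': react to elevated server-side errors
    Type: AWS::CloudWatch::Alarm
    Properties:
      Namespace: AWS/ApiGateway
      MetricName: 5XXError
      Dimensions:
        - Name: ApiName
          Value: demo-api
      Statistic: Sum
      Period: 300
      EvaluationPeriods: 1
      Threshold: 5
      ComparisonOperator: GreaterThanThreshold
```

Declaring these pieces in one template is one way to keep change control, usage control, security and alarming alongside the API definition itself, rather than spread across separate tools.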

It was only through learning, setup, prototype development and implementation in real projects that we were able to verify for ourselves that Amazon API Gateway truly met the expectations we had formed, as well as answering the questions we had asked beforehand.

Data does not come from a single path, nor is it directed along a single route, but that does not mean it can be lost or ‘distracted’ from its objective: to become useful information for the company, with high availability and the flexibility to adapt to the constant changes the world now experiences.

“At Morris & Opazo we have found requirements that are very precise about their data origins, but that could not stay ‘frozen in time’: we had to be ready to easily adapt current systems to future data sources. For these purposes we have used Amazon API Gateway, because it not only lets us develop the APIs needed now, but also lets us keep detailed control of their usage and the potential costs for our clients, and makes it easier for us to maintain API versions.” (Cristian Pereira, Senior Project Manager, Morris & Opazo)
