Predix.io - Podcast

Predix is the operating system for the Industrial Internet, powering the digital industrial businesses that drive the global economy. By connecting industrial equipment, analyzing data, and delivering real-time insights, Predix-based apps are unleashing new levels of performance from both GE and non-GE assets. Subscribe to hear subject matter experts explain how they are leveraging Predix with the latest technologies.

IoT Goes Industrial - Episode 13

Predix does blockchain? Yes! In this episode, we hear from our two blockchain experts, Vineet Banga and Atul Kshirsagar, who will explain how you can use this service to ensure integrity, security, and traceability for your IIoT app.
Internet and technology · 7 years · 23:07

IoT Goes Industrial - Episode 12

Jeremey Osterhoudt, Senior Staff Engineer on our Predix Mobile Team, reviews our newest Predix SDK for iOS, written in Apple’s Swift programming language. Use this SDK to unlock rich APIs and user interface elements, connect to back-end services, access data offline, leverage the latest iOS technology, and incorporate the best design practices.
Internet and technology · 8 years · 18:35

IoT Goes Industrial - Episode 11

In this episode, we get to meet Aileen Hackett, Technical Product Manager, and Ken Skistimas, Director of UX for the Predix Design System. They review the Design System and give you some great insight on how to leverage its UX capabilities and tooling to build world-class Industrial Internet apps.
Internet and technology · 8 years · 25:04

IoT Goes Industrial - Episode 10

Hear from Predix Developer Evangelist Jayson DeLancey on the new Predix Python SDK and the related work he did on the connected volcano. Volcano connected to the Predix cloud? Yes!
Internet and technology · 8 years · 28:41

IoT Goes Industrial - Episode 9

In this episode, Predix Engineer and Predix Builder Influencer Grant Griffiths discusses the Go programming language: its advantages, why it strongly aligns with Industrial IoT apps, and why you should consider using it for your Predix projects.
Internet and technology · 8 years · 24:24

IoT Goes Industrial - Episode 8

The App Composer Service helps you rapidly construct prototypes and production applications that leverage Predix and other native microservices. In this episode, Alex Aminan, COO, explains how Predix developers can eliminate long, laborious, and error-prone software development cycles with this service.
Internet and technology · 9 years · 29:35

IoT Goes Industrial - Episode 7

In this episode, Phill Ramey, from Azuqua, discusses the Workflow tile and how developers can leverage this service to accelerate their app development. The service helps you quickly integrate data and automate processes across devices, machines, and locations.
Internet and technology · 9 years · 22:01

IoT Goes Industrial - Episode 6

We just dug this one out of our archives. In this episode, we hear from Jon Zucker, Predix Developer Community Manager, on what he's currently doing and planning for the global Predix developer community. Although this was recorded nearly a year ago, it all still holds true today, and some of Jon's visions for the community have even become a reality.
Internet and technology · 9 years · 22:51

IoT Goes Industrial - Episode 5

During this episode, Ravi Karra shares his thoughts on the IoT, cloud development, and best practices for architecting a monolithic app that you want to move to Predix.
Internet and technology · 9 years · 27:31

IoT Goes Industrial - Episode 4

Siva Balan - one of our star podcasters and a developer on the Predix team - discusses what Performance Engineering is and why it's so critical when building cloud-based apps on Predix.
Internet and technology · 9 years · 30:43

All Things Performance - Episode 4

The basics of JVM tuning when deploying Java apps to a Cloud Foundry environment.
Internet and technology · 9 years · 07:00
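In Cloud Foundry, JVM tuning of the kind discussed in this episode is usually expressed through the app's deployment manifest. The sketch below is illustrative only, assuming the Java buildpack picks up a `JAVA_OPTS` environment variable; the app name, paths, and sizes are placeholders, not values from the episode.

```yaml
applications:
- name: demo-service                 # hypothetical app name
  memory: 1G
  path: target/demo-service.jar
  env:
    # Keep the heap well below the container memory limit so that
    # metaspace, thread stacks, and native memory still fit.
    JAVA_OPTS: "-Xms512m -Xmx768m -XX:MetaspaceSize=128m -verbose:gc"
```

The key design point is the same one the episode makes: the container limit (`memory`) and the heap limit (`-Xmx`) are separate knobs, and sizing the heap at or above the container limit leads to out-of-memory kills by the platform rather than by the JVM.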

All Things Performance - Episode 3

Introduction

Hello and welcome to the third edition of the “All Things Perf” podcast. I am Siva Balan, Sr. Staff Performance Engineer with the Predix Application Services Engineering team, based out of the Software CoE in San Ramon, CA. In the two previous episodes I talked about the performance framework tech stack and about the performance testing process adopted by the Predix Application Services Engineering team. In this episode, I will discuss a very popular containerization technology called Docker and how we use it in our tech stack.

What is Docker?

Many of you may have heard of, and even used, the containerization technology called Docker. I am just going to give a brief introduction here; for a more in-depth understanding, the web has a treasure trove of information. In simple terms, Docker allows you to package an application (like JMeter, Jenkins, the ELK stack, or any application that runs on a Linux platform) with all of its dependencies (like a JDK, Python, C compilers, etc.) into a standardized unit for software development. A Docker container wraps a piece of software in a complete file system that contains everything it needs to run: code, runtime, system tools, system libraries - anything you can install on a server. This guarantees that it will always run the same, regardless of the environment (on-premise, cloud, hybrid) it is running on.

Why use Docker?

We always have the choice of running the performance testing tools on any platform. But when developing a common framework that will be used by many different teams across geographical regions, it helps to standardize the environment in which these tools run. That is where Docker comes in. It standardizes the testing environment, with the required dependencies and versions, and makes sure the tooling runs the same no matter where it is deployed. Another advantage is that it is easy to scale the deployment if the load requirement increases: all we need to do is spin up more Docker containers on the same VM or on different VMs, with no need to install or configure the tooling framework. It also gives you the flexibility to easily integrate performance testing with CI tools like Jenkins and with newer DevOps services being deployed to Cloud Foundry. More about that in another podcast.

How to use Docker?

We saw in earlier podcasts that JMeter is our tool of choice for performance testing. Now let us look at how we can use Docker to create an image of JMeter. The first step is to create a base Docker image from which we can build images for JMeter. Once you have the base image, you can create a JMeter server image and a JMeter client image. The server image runs JMeter in server mode, acting as a master that collects the logs and monitors the health of the slaves. The client images run JMeter in client mode and are the slaves actually generating the load. When starting the JMeter client Docker image, it can be configured to run standalone or in a master-slave configuration. When running standalone, it can be wired to persist the results in an ELK stack using Logstash filters. We have a working prototype of this in the performance lab that we actively use for performance testing Predix application services.

Does Docker add a layer of complexity?

How do we know that we are not adding an additional layer of complexity to an already complex performance tooling stack? Well, the initial learning curve of adopting Docker is slightly steep, but once that is overcome, it is one of the most flexible ways to deploy performance tooling to various environments, and it makes subsequent deployments painless. Even though JMeter runs within a Docker container, the speed at which it starts generating load is impressive; sometimes I have seen it run faster than it would natively on the VM (outside the Docker container). There is a thin layer of network and resource separation that containerization brings with it, and that works to our advantage when scaling out the JMeter instances.

Is Docker mandatory, and what are its use cases?

No, it is not mandatory. It is just another way to deploy the performance test framework tools, and it streamlines the process as you do so. Docker does have its shortcomings. For one, it cannot run natively on Windows, and to test it locally on your Mac you have to run it on a Linux image using VirtualBox. The Docker solution is better suited to cloud-based deployments (like AWS), where your app resides inside Cloud Foundry or is deployed directly on AWS. Docker runs natively on Linux, and the cloud platform is a perfect use case for it. It also runs very well on on-prem Linux deployments. JMeter on Docker is just one use case; the technology can be applied to any number of applications.

Are there other technologies like Docker?

Yes, there are a few. Rocket is one such containerization technology, from a company called CoreOS. Cloud Foundry built its own container technology, called Warden, on which it deploys and runs apps. But Docker seems to be the most popular, with a lot of community support and widespread adoption.

Conclusion

Docker is still an evolving technology, and it is gaining momentum at a rapid pace. The DevOps-as-a-service team is using Docker containers to run Jenkins builds, and more and more applications are realizing the power of containerization and exploring how best to utilize it. We have just scratched the surface with this approach; there is a whole lot of fun stuff we could do with this technology. Thanks for listening, and I hope you enjoyed this podcast episode. We would love to have your feedback and your suggestions on performance engineering topics you would like to hear more about. You can reach me at balan@ge.com with your comments and suggestions. Until next time, this is Siva Balan, signing off from San Ramon, CA. Thank you.
Internet and technology · 9 years · 07:13
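The JMeter image described in this episode could be built from a Dockerfile along these lines. This is an illustrative sketch, not the team's actual image: the base image, JMeter version, and download URL are assumptions.

```dockerfile
# Base image with a Java runtime, since JMeter is a Java application
FROM openjdk:8-jre

RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*

# Version is a placeholder; pin whatever your framework standardizes on
ENV JMETER_VERSION=3.1
RUN curl -sL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz \
    | tar -xz -C /opt
ENV PATH="/opt/apache-jmeter-${JMETER_VERSION}/bin:${PATH}"

# Default to server (master/slave) mode; override the command with
# `jmeter -n -t plan.jmx` for a standalone client run.
CMD ["jmeter-server"]
```

Because the version is baked into the image, every team that pulls it gets an identical JMeter installation, which is exactly the standardization benefit the episode describes; scaling out is then a matter of starting more containers from the same image.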

All Things Performance - Episode 2

All Things Perf Episode 2: Performance engineering and testing for microservices (6:35 mins)

This is the second in a series of podcasts aimed at disseminating the tools, technologies, and techniques adopted by the Predix Application Services Engineering performance team to a wider audience. If you missed the first episode, you can find it on the Predix resources page as an audio file and blog post. This second episode discusses how we use performance testing as a checkpoint in the CI/CD pipeline. Please listen to or read it and provide your feedback on the content and future topics of interest.

Introduction

Hello and welcome to the second edition of the “All Things Perf” podcast. I am Siva Balan, Sr. Staff Performance Engineer with the Predix Application Services Engineering team, based out of the Software CoE in San Ramon, CA. This podcast series takes a stab at explaining how we approach performance engineering and testing of the various microservices being developed as part of Predix. In the first edition, we discussed the technology choices for performance engineering and testing of Predix microservices; if you have not heard it yet, I highly encourage you to listen to it. This second episode focuses on how we use performance testing as a checkpoint for a particular microservice in the Predix Application Services suite. We will discuss how we introduce non-functional testing as a requirement in the CI/CD pipeline for a build to be promoted to Production.

The steps in the CI/CD pipeline

The CI/CD process consists of two main components: Continuous Integration and Continuous Deployment. First comes the CI process. It kicks in as soon as a developer checks in a functional piece of code to a branch that a CI tool like Jenkins or Atlassian Bamboo is monitoring for new check-ins. In our case, we use Jenkins. Jenkins checks out the code from the source control repository, which in our case is GitHub, and builds it as a first step. It then runs static code analysis, unit tests, and code coverage reports; if everything passes, it marks the build as successful and pushes it to the artifactory. The next step is the integration tests, which Jenkins runs using the artifact from the artifactory. Once all the tests are successful, the build is ready to go through non-functional testing. Performance testing acts as a bridge between CI and CD: only if the non-functional tests pass will the build be promoted to the CD pipeline that delivers it to Production. We will talk about the CD pipeline in another episode; here we focus on how performance testing decides whether a build is deployed to Production.

Performance testing process

Once the build has passed integration tests, it is ready for non-functional testing. This involves deploying the microservice to the Cloud Foundry environment, binding it to any database, messaging, and caching services, binding it to logging and monitoring services, and then testing it against the non-functional requirements. We also need to load test data before the actual tests can run. Once the environment is ready, we typically run three types of tests. The first is a capacity test, to determine that the application performs optimally under a given workload on one JVM instance. The second is a scalability test, to determine how the application scales on multiple JVM instances as the workload increases. The third is an endurance test, which can run for several hours or more, to identify any resource leaks. Ideally we run all three tests for every build and make sure all SLAs are satisfied before it is deployed to Production.

Test automation

So let's now see how we can automate this process of performance testing a build and marking it as Production ready. With a combination of Jenkins and some shell scripting, it is possible to automate the whole flow. Once the integration tests are done and a successful build is pushed into the artifactory, that artifact is used for performance testing. A Jenkins job first downloads the build from the artifactory and pushes it to a performance testing space in Cloud Foundry, with a custom manifest.yml file tailored for non-functional testing. Then comes the process of binding this microservice to the specific database, messaging, and caching services, and loading performance test data as necessary. The microservice is also bound to performance-test-specific logging and monitoring services; as discussed in the previous episode, we use the ELK stack for logging and New Relic for monitoring. One thing to keep in mind is that at the start of every performance test, the database, messaging, and caching services are not re-used from previous tests, as we load fresh test data for every run. The logging and monitoring services, however, are re-used, because we want to maintain a history of logs and resource utilization metrics for comparison. So when we bind services at the start of a test, we need to pay attention to which services must be recreated and which can be re-used. Once the environment is prepared, we are ready to start the tests. We can run all three types of tests in parallel or one after the other; given that we are running in a highly scalable Cloud Foundry environment, we should be able to run all three in parallel, with Jenkins kicking off the tests and monitoring them. Once the tests are done, custom scripts evaluate the SLAs of each test; if all SLAs are satisfied, the tests are marked as passed and the build is marked as ready for Production deployment. If even one of the three tests fails its SLA, the build is marked as failed and requires further evaluation. This process ensures that only well-tested builds are pushed to Production: the CD process will not deploy a build unless the non-functional-test Jenkins job marks it as successful.

Conclusion

I hope that gave you some ideas on how non-functional testing can be used as a checkpoint in your CI/CD pipeline when deploying your apps or microservices to Cloud Foundry. There are different ways to do this, but the key takeaway is that your apps or microservices should never be deployed to Production without rigorous non-functional testing. Making it part of your CI/CD pipeline will save you a lot of nights and weekends on call and pager duty. Thanks for listening, and I hope you enjoyed this podcast episode. We would love to have your feedback and your suggestions on performance engineering topics you would like to hear more about. You can reach me at balan@ge.com with your comments and suggestions. Until next time, this is Siva Balan, signing off from San Ramon, CA. Thank you.
Internet and technology · 9 years · 07:31
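The SLA-evaluation step this episode mentions ("custom scripts to evaluate the SLAs of each test") can be sketched in a few lines of Python. The metric names and thresholds below are invented for illustration; the episode does not specify the actual SLAs.

```python
# Sketch of an SLA gate: compare each test's measured metrics against
# its thresholds and decide whether the build may be promoted.
# Test names, metrics, and limits are hypothetical placeholders.

SLAS = {
    "capacity":    {"p95_ms": 250, "error_rate": 0.01},
    "scalability": {"p95_ms": 400, "error_rate": 0.01},
    "endurance":   {"p95_ms": 300, "error_rate": 0.005},
}

def evaluate(results):
    """results: {test_name: {"p95_ms": float, "error_rate": float}}
    Returns (promotable, list_of_failure_messages)."""
    failures = []
    for test, limits in SLAS.items():
        measured = results.get(test)
        if measured is None:
            failures.append(f"{test}: no results recorded")
            continue
        for metric, limit in limits.items():
            if measured[metric] > limit:
                failures.append(
                    f"{test}: {metric}={measured[metric]} exceeds SLA {limit}")
    # Even one failing test blocks promotion to Production
    return (not failures, failures)

if __name__ == "__main__":
    results = {
        "capacity":    {"p95_ms": 180, "error_rate": 0.0},
        "scalability": {"p95_ms": 390, "error_rate": 0.0},
        "endurance":   {"p95_ms": 310, "error_rate": 0.0},  # breaches p95 SLA
    }
    ok, failures = evaluate(results)
    print("PROMOTE" if ok else "BLOCK", failures)
```

A Jenkins job would run a script like this after the three tests finish and use its exit status (or output) as the promotion signal, matching the episode's rule that a single failing SLA blocks the build.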

All Things Performance - Episode 1

All Things Perf Episode 1: Technology choices for performance engineering of Predix Application Services (6:35 mins)

This is the first in a series of podcasts aimed at disseminating the tools, technologies, and techniques adopted by the Predix Application Services Engineering performance team to a wider audience. Please listen to it and provide your feedback on the content and future topics of interest.

Introduction

Hello and welcome to the first edition of the “All Things Perf” podcast. I am Siva Balan, Sr. Staff Performance Engineer with the Predix Application Services Engineering team, based out of the Software CoE in San Ramon, CA. This podcast series takes a stab at explaining how we approach performance engineering and testing of the various microservices being developed as part of Predix. These microservices serve as the foundational building blocks for the Industrial Internet platform. Each podcast in this series will be about 5-10 minutes long, covering a particular topic. If you are interested in more detail on any topic, feel free to reach out to me at balan@ge.com; I can give you more details or make another podcast if there is sufficient interest from the audience. This episode focuses on the technology choices for performance engineering of Predix Application Services. As you may or may not be aware, Predix is built on the Cloud Foundry “Platform as a Service” architecture; I am not going to go into the details of the Cloud Foundry architecture, which you can read up on elsewhere. Predix also relies heavily on the Continuous Integration/Continuous Deployment model (referred to here as the CI/CD pipeline) for pushing new versions and changes to Production. This shift gave us an opportunity to rethink how we approach performance engineering of these microservices.

Technology choices

To keep up with the CI/CD deployment model, we had to choose tools that fit well with it. After various PoCs, we settled on the following: Apache JMeter™ (JMeter) as the performance testing tool to generate load; New Relic as the monitoring tool to track JVM and resource utilization during performance tests; and the ElasticSearch/Logstash/Kibana stack to store and visualize the performance test results. Let us look at each in detail and provide some context for why we made those choices.

JMeter

Until now, we had been using HP LoadRunner as the performance testing tool for the Predix platform. Because we needed additional capabilities to adapt to our changing technologies and deployment model, we evaluated other leading performance testing tools and selected JMeter. JMeter was the most adaptable solution for us, with robust open source community support behind it if needed. It also has a very nice plugin capability for extending any functionality we consider important for our performance testing stack. In fact, the performance team has already created some extensions using this plugin capability for key functionality, such as collecting the results of various JMeter tests running simultaneously and persisting them in ElasticSearch. I will talk more about that in another episode. All of this led us to narrow down to JMeter as the tool of choice for performance load testing.

New Relic

Because we decided to move away from HP LoadRunner as our testing tool, we also had to find a new monitoring tool, as we could no longer use HP SiteScope. And because we now deploy the microservices within Cloud Foundry, we needed a monitoring tool that integrates with Cloud Foundry. The choices were New Relic®, AppDynamics, and Sensu. We went with New Relic based on its large user base and its customization capability. New Relic is also offered in the Pivotal Web Services marketplace for monitoring Cloud Foundry applications, which helped prove its adoptability. Unlike JMeter, it involves licensing costs, so that had to be factored in as well.

ElasticSearch/Logstash/Kibana (ELK) stack

Having chosen JMeter, we needed a way to store and visualize the results it generates. The choices were Splunk or ELK. Splunk has licensing costs associated with it; since ELK is open source and provides robust customization capabilities, we decided to go with it. ELK stands for ElasticSearch/Logstash/Kibana.

Conclusion

I hope that gave you some insight into the tools and technologies we have adopted for performance engineering the Predix microservices. This can also be extended to performance testing of services and applications built on top of these microservices. We are also working to build this performance testing into the CI/CD pipeline; we will talk about that in another podcast episode. Thanks for listening, and I hope you enjoyed this podcast episode. We would love to have your feedback and your suggestions on performance engineering topics you would like to hear more about. You can reach me at balan@ge.com with your comments and suggestions. Until next time, this is Siva Balan, signing off from San Ramon, CA. Thank you.
Internet and technology · 9 years · 05:42
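The JMeter-to-ELK flow this episode describes can be sketched as a Logstash pipeline configuration that reads JMeter's CSV result files and indexes them into ElasticSearch for Kibana dashboards. This is an illustrative sketch only: the file path, index name, and column list are assumptions, and the columns must be matched to the fields your JMeter run actually writes.

```conf
input {
  file {
    path => "/results/*.jtl"        # JMeter CSV result files; path is a placeholder
    start_position => "beginning"
  }
}
filter {
  csv {
    # Subset of typical JMeter CSV columns - align with your own output format
    columns => ["timeStamp", "elapsed", "label", "responseCode",
                "success", "bytes", "allThreads"]
  }
  mutate {
    convert => { "elapsed" => "integer" }   # so Kibana can aggregate latencies
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "jmeter-results"
  }
}
```

With results stored this way, the history of runs survives individual test environments, which is the comparison-over-time benefit the later episodes rely on.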

IoT Goes Industrial - Episode 3

Dario Amiri joins us this episode for a discussion on Security. He explains the two Predix security services of UAA (User Account and Authorization) and ACS (Access Control Service).
Internet and technology · 10 years · 32:04

IoT Goes Industrial - Episode 2

Steve Winkler joins us this episode for a discussion on Open Source Software (OSS). He believes that OSS ate the traditional software model that we all are familiar with. GE is doing some really clever things with OSS and Cloud Foundry.
Internet and technology · 10 years · 21:40

IoT Goes Industrial - Episode 1

In this IoT Goes Industrial episode, Bett Bolhoefer talks about how a developer can jumpstart their development on the Predix platform using the Getting Started material on https://www.Predix.io
Internet and technology · 10 years · 07:18