I am investigating the role networks play in overall application performance at scale. (So not your single webpage sitting on your own single server.)
My first hypothesis is that applications and their data are drifting further and further apart physically. This is driven by multi-cloud environments, in which data may live in one cloud while the application processing it runs in another. Generally, network performance seems to be becoming increasingly relevant to application performance - especially in highly parallelized computing setups with high-speed storage arrays.
My second hypothesis is that the increased use of frameworks leads to less transparency about how each request is actually built and how it loads the network. A simple example is a database connection. Does the framework connect, open, query, close and disconnect every time? Or only open, query, close? I am really after the network traffic perspective here: how many packets are sent back and forth? How much do the routers in between have to work? This largely depends on how the framework behaves. There are ample arguments for keeping connections open, but there are also arguments for closing them as soon as you have what you wanted - especially in web-native applications.
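To make the connection-lifecycle question concrete, here is a rough back-of-the-envelope model of network round trips per query under the two strategies. The round-trip counts are illustrative assumptions (a TCP handshake, one authentication exchange, one request/response per query, one teardown), not measurements of Zend or any specific database protocol:

```python
# Illustrative round-trip counts; real protocols vary.
HANDSHAKE_RTT = 1   # TCP three-way handshake
AUTH_RTT = 1        # DB authentication exchange
QUERY_RTT = 1       # one request/response per query
CLOSE_RTT = 1       # connection teardown

def round_trips(queries: int, reconnect_each_time: bool) -> int:
    """Total network round trips for `queries` queries."""
    if reconnect_each_time:
        # pay setup and teardown on every single query
        return queries * (HANDSHAKE_RTT + AUTH_RTT + QUERY_RTT + CLOSE_RTT)
    # persistent connection: pay setup/teardown once
    return HANDSHAKE_RTT + AUTH_RTT + queries * QUERY_RTT + CLOSE_RTT

print(round_trips(100, reconnect_each_time=True))   # 400
print(round_trips(100, reconnect_each_time=False))  # 103
```

Even with these crude numbers, reconnecting per query roughly quadruples the traffic the intermediate routers have to carry - which is exactly the kind of behavior a framework can hide from you.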
Any insights on how Zend takes network constraints into account? Or does Zend assume everything is on a LAN and not really care?
Thanks for your insights.