1. Background
Our team has recently come under frequent network attacks, which caught the attention of the technical lead. Since I have a comparatively strong security background on the team, I spent some time compiling a security development self-check list. I expect many readers will find it useful, so I am sharing it here.
2. Coding Security
2.1 Input Validation
Explanation of inspection items
Overview: treat any data submitted from the client, such as URLs and parameters, HTTP headers, JavaScript, or other embedded code, as untrusted. Validate data at every component and functional boundary, inside or outside the application, as potentially malicious input.
Whitelist validation: where untrusted data can be checked against a whitelist, accept only the data that matches the whitelist and block everything else.
Blacklist filtering: when untrusted data contains known-bad characters, such as null bytes (%00), line breaks (%0d, %0a, \r, \n), or path characters (../ or ..\), it is best to block the data outright. If the data must be accepted, apply the appropriate sanitization.
Normalization: normalize untrusted data before sanitizing and validating it, for example by converting relative paths used in directory traversal (./ or ../) into absolute paths and performing URL decoding.
Sanitization: when untrusted data requires sanitization, either remove malicious characters entirely, leaving only known-safe characters, or encode ("escape") the data appropriately before processing. For example, HTML-encoding data before output to an application page prevents script attacks.
Validity checks on untrusted data cover: data type (character, number, date, and so on), data range, and data length.
SQL injection prevention: before untrusted data reaches backend database operations, use parameterized queries (prepared statements) to avoid SQL injection.
File validation: when the untrusted data is a decompressed file, reject it if it resolves outside the service directory or exceeds the size limit.
Access control: after untrusted data passes the checks above, also confirm that the submitted content matches the user's identity, to prevent unauthorized access.
2.2 Output Verification
Explanation of inspection items
Overview: encode all output correctly for the target interpreter.
Encoding: when untrusted data is output to front-end or back-end pages, encode it for the output context, for example HTML entity encoding or URL encoding.
Sanitization: pay particular attention to output used in operating system commands, SQL, and LDAP queries, and mask all sensitive information in output, such as bank card numbers, phone numbers, and system details.
2.3 SQL injection
Explanation of inspection items
Overview: validate user input before it enters the application's SQL operations.
Parameterization: use parameterized queries (PDO for PHP, PreparedStatement for Java, SqlParameter for C#) so that sensitive characters such as the single quote are escaped before SQL operations are performed.
Least privilege: configure the minimum database permissions each application needs, never perform database operations with administrator privileges, and limit the number of connections.
Sensitive data encryption: store sensitive information using encryption, hashing, or obfuscation to reduce the impact of a data leak caused by a potential vulnerability.
Disable error echo: never return messages containing sensitive information when debug mode is enabled or an exception occurs. Use a custom error template, and write exception details to logs for security auditing.
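To make the parameterization point concrete, here is a minimal sketch using Python's sqlite3 driver (the checklist names PDO/PreparedStatement/SqlParameter; sqlite3 is used here only because it is self-contained; the table and data are hypothetical).

```python
import sqlite3

def find_user(conn: sqlite3.Connection, name: str) -> list:
    # Parameterized query: the driver binds `name` as data, never as SQL text,
    # so input like "' OR '1'='1" cannot alter the statement's structure.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
```

A classic injection payload passed as `name` simply matches no rows, because it is compared as a literal string.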
2.4 XSS (Cross-Site Scripting)
Explanation of inspection items
Input validation: filter and escape input data, including but not limited to dangerous special characters such as < > " ' % ( ) & + \.
Output encoding: encode input data differently depending on where it is output. For HTML tags, use HTML encoding; for URLs, use URL encoding; for JavaScript, use script escaping; for stylesheets, use CSS encoding.
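The context-dependent encoding rule can be sketched with Python's standard library (an illustrative sketch; the payload is a typical XSS probe, and the variable names are hypothetical).

```python
import html
from urllib.parse import quote

payload = '<img src=x onerror=alert(1)>'

html_encoded = html.escape(payload)    # for output into HTML tag context
url_encoded = quote(payload, safe="")  # for output into a URL component
```

The same value gets a different encoding per destination; applying HTML encoding to a URL parameter (or vice versa) would leave the other context exploitable.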
2.5 XML injection
Explanation of inspection items
Input validation: filter special characters such as < > & in user-submitted parameters when referencing internal or external data in XML documents. Disable loading of external entities and disable error reporting.
Output encoding: escape XML element attributes and content.
2.6 CSRF (Cross-Site Request Forgery)
Explanation of inspection items
Token: for important operations, add a Token field generated from the session to the form, and verify the field on the server after submission.
Secondary verification: require a second identity check, such as a password, image CAPTCHA, or SMS verification code, when key forms are submitted.
Referer validation: check the Referer header of the request for cross-domain submissions.
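The token mechanism above can be sketched as follows; this is an illustrative Python sketch (the session dict stands in for whatever session store the application uses, and the function names are hypothetical).

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Generate a random token bound to the session; embed it in the form."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session: dict, submitted: str) -> bool:
    """Compare in constant time against the server-side copy."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)
```

A forged cross-site request cannot read the victim's page, so it cannot supply the matching token.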
3. Logical Security
3.1 Identity verification
Explanation of inspection items
Overview: all access to non-public pages and resources must go through a standard, shared authentication process on the backend service.
Credential submission: user credentials must be encrypted and submitted via POST. Use HTTPS to encrypt the channel and authenticate the server.
Failure handling: handle failed authentication safely, for example responding with "username or password incorrect" so that no unnecessary information is leaked.
Exception handling: the login entry point must defend against brute force and credential stuffing (batch login attempts using leaked password dictionaries). After a number of failed attempts, automatically require a Turing test (CAPTCHA); after further failures, automatically lock the account to restrict access.
Secondary verification: for key operations such as changing the account password, updating data, or paying for a transaction, first run a Turing test, then perform a second verification of the user's identity. The payment flow should also produce a complete evidence chain, with transaction data digitally signed by the initiator.
Multi-factor authentication: for highly sensitive or core business systems, use multi-factor mechanisms such as SMS verification codes or software and hardware tokens.
3.2 SMS verification
Explanation of inspection items
Code generation: a verification code should be at least 6 digits or letters, used only once, with a recommended validity period of no more than 180 seconds.
Rate limiting: enforce a limit of one code per 60 seconds per user on both the front end and the back end; a suggested cap is 10 SMS messages per user per day.
Safety tips: the message should at least state the purpose of the operation and the sending number, and warn the user to confirm the operation was their own.
Credential verification: never return the verification code in the response. The server should verify credentials such as the password and the SMS code together, to prevent multi-step authentication bypass.
3.3 Turing test
Explanation of inspection items
Code generation: a CAPTCHA should be at least 4 digits or letters, or use a mechanism such as a puzzle; it should be single-use, with a recommended validity period of no more than 180 seconds.
Usage suggestion: balancing user experience and security, the CAPTCHA input box can be shown automatically after the user enters a wrong password once.
Verification: never return the CAPTCHA value in the response; always verify it on the server side.
3.4 Password Management
Explanation of inspection items
Password setting: passwords should be at least 8 characters long and include uppercase and lowercase letters, digits, and special characters. Password settings must be verified on the backend; passwords that fail the complexity requirements must be rejected.
Password storage: compute a digest of the user password together with a unique random salt (Salt) using a hash algorithm (such as SHA1), and store the digest and salt values; storing the two separately is recommended.
Password modification: when a user changes their password, authenticate the change through their phone number or email. After the change, notify the user by SMS or email so they can confirm whether the operation was their own and be alerted to the security risk.
Password retrieval: the backend must perform secondary verification against the registered phone number or email. Verification codes and links should be sent only to the pre-registered address and given an expiry to prevent brute-force attacks. Security questions should draw from as large a random pool as possible. When multiple verification steps are used, enforce their order so that earlier steps cannot be skipped to jump straight to the final authentication step.
Password usage: in application development, never set universal passwords, hard-code plaintext passwords, operate with the database administrator account, share one account across different users, or output passwords to log files or the console.
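The salted-digest storage scheme can be sketched as follows. The checklist mentions SHA1; this sketch uses PBKDF2 with SHA-256, a deliberately slow construction that is a stronger choice for passwords. The iteration count and function names are illustrative assumptions.

```python
import hashlib
import hmac
import secrets

ITERATIONS = 100_000  # slows the hash to resist offline brute force

def hash_password(password: str) -> tuple[str, str]:
    """Digest the password with a unique random salt; store digest and salt separately."""
    salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), bytes.fromhex(salt), ITERATIONS)
    return digest.hex(), salt

def verify_password(password: str, digest_hex: str, salt: str) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), bytes.fromhex(salt), ITERATIONS)
    return hmac.compare_digest(candidate.hex(), digest_hex)
```

Because each user gets a fresh random salt, identical passwords produce different digests, defeating precomputed rainbow tables.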
3.5 Session Security
Explanation of inspection items
Prevent session hijacking: use HTTPS throughout the authenticated session, not just on the login page. If a connection is upgraded from HTTP to HTTPS, regenerate the session identifier. Never switch back and forth between HTTP and HTTPS, which can lead to session hijacking.
Session identifier security: set session cookie attributes correctly: 'HttpOnly' (prevents scripts from reading the cookie), 'Secure' (prevents the cookie from being sent to the server over plain HTTP), 'Domain' (restricts the domains authorized for cross-domain access), and 'Path' (restricts the directory paths authorized for access).
Cookie security: carry the session identifier in the HTTP(S) header; never pass it as a GET parameter, and never record it in error messages or logs.
Anti-CSRF: implement a complete session management mechanism on the server side, ensuring every session request passes authentication and permission control, to prevent cross-site request forgery (CSRF) vulnerabilities.
Session validity: set the session lifetime by balancing risk and functional requirements. Periodically generating a new session identifier and invalidating the old one mitigates session hijacking when the original active identifier is stolen.
Session logout: provide the logout function on every authenticated page. When a user logs out, immediately clear session-related information and terminate the session connection.
3.6 Access Control
Explanation of inspection items
Control method: separate the access-control logic from the rest of the application code, and manage access on the server based on the session identifier.
Control management: restrict protected URLs, files, services, application data, configuration, direct object references, and so on, to authorized users only.
Interface management: restrict protected local programs and resources to authorized external applications or interfaces only.
Permission changes: when permissions change, record a log and notify the user so they can confirm whether the operation was their own and be alerted to the security risk.
3.7 File Upload Security
Explanation of inspection items
Identity verification: verify the legitimacy of the user's identity on the server before accepting the upload.
Legitimacy verification: verify file attributes on the server with a whitelist: document type (for example the file extension and file header checks) and size (for images, check width, height, and pixel count).
Storage environment: save uploaded files on a file server isolated from the application environment (configured with its own domain name), and set the storage directory's permissions to non-executable.
Path hiding: rename successfully uploaded files with randomized names, and never return the storage path to the client.
File access: serve downloads in binary format, and avoid providing direct access to uploaded files (to prevent uploaded Trojan files from being executed).
3.8 Interface Security
Explanation of inspection items
Network restrictions: restrict the caller's network through technical measures such as firewalls, host allowlists, and Nginx deny rules.
Identity authentication: verify the caller's identity with measures such as keys, secrets, and certificates; never share credentials.
Data integrity: digitally sign all call parameters using a digest algorithm such as SHA1 so that tampering can be detected.
Legitimacy checks: validate call parameters, for example whether the parameters are complete, whether the timestamp and token are valid, and whether the caller has permission.
Availability: make calls idempotent to guarantee data consistency, and limit call frequency and validity periods.
Exception handling: monitor call behavior in real time and block anomalies as soon as they are detected.
4. Data Security
4.1 Sensitive Information
Explanation of inspection items
Transmission: never place sensitive information such as usernames, passwords, or card numbers in GET request parameters; transmit all sensitive information over TLS.
Client storage: disable form autofill and never store sensitive information in plaintext on the client.
Server storage: never hard-code sensitive information in the program, and never store sensitive data such as user passwords, ID numbers, bank card numbers, or cardholder names in plaintext. Clear and release sensitive data written temporarily to memory or files as soon as possible.
Maintenance: never upload source code or SQL dumps to open-source platforms or communities such as GitHub or Open Source China.
Display: when sensitive information is displayed on a web page, desensitize (mask) the sensitive fields on the backend server.
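Backend desensitization usually means masking the middle of a value. A minimal sketch (the function name and mask character are illustrative; which digits to keep depends on the field and on local regulation):

```python
def mask_middle(value: str, keep_head: int, keep_tail: int) -> str:
    """Desensitize on the backend: keep only the leading and trailing characters."""
    if len(value) <= keep_head + keep_tail:
        return "*" * len(value)  # too short to reveal anything safely
    hidden = len(value) - keep_head - keep_tail
    return value[:keep_head] + "*" * hidden + value[-keep_tail:]
```

For example, a phone number might keep the first 3 and last 4 digits, a card number the first 4 and last 4.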
4.2 Log specifications
Explanation of inspection items
Recording principle: ensure logs capture important application events, but never store sensitive information such as session identifiers, account passwords, or identity documents.
Event types: record all authentication events, access operations, data changes, critical operations, management functions, logout records, and similar events.
Event requests: for each event, record at least the time of occurrence, the source IP address of the request, and the user account (if authenticated).
Log protection: protect logs strictly against unauthorized reads or writes.
4.3 Exception Handling
Explanation of inspection items
Fault tolerance: the application should have a complete exception-capture mechanism, such as try-catch blocks, around typical operations on files, the network, databases, and commands. When an exception occurs, record the time, code location, error details, and (where possible) the user who triggered it in the log. Serious exceptions in important systems should trigger alerts so operators can investigate and fix them promptly.
Custom error messages: in production, the application should never return system-generated messages or other debugging information in its responses. Configure the application server to handle otherwise-unhandled errors and return custom error messages.
Hiding user information: never disclose user privacy information in exception output, typically including identity information, home address, phone number, bank account, communication records, and location information.
Hiding system information: never disclose sensitive system information in exception output, such as user accounts and passwords, development keys, source code, application architecture, system accounts, or network topology.
State recovery: when an exception occurs, restore the previous state, for example by rolling back a failed business operation. When an object modification fails, restore the object's original state to keep it consistent.
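The rollback-on-failure idea can be sketched with a transaction. An illustrative Python/sqlite3 sketch (the accounts table and transfer rule are hypothetical): the `with conn:` block commits only if every step succeeds, otherwise it rolls back and the previous state is restored.

```python
import sqlite3

def transfer(conn: sqlite3.Connection, src: int, dst: int, amount: int) -> bool:
    """On failure the transaction rolls back, restoring the previous state."""
    try:
        with conn:  # commits on success, rolls back on exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
            (balance,) = conn.execute("SELECT balance FROM accounts WHERE id = ?", (src,)).fetchone()
            if balance < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))
        return True
    except ValueError:
        return False  # transaction rolled back; state unchanged

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()
```

A failed transfer leaves both balances exactly as they were, which is the consistency property the checklist item asks for.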
5. Host Security
5.1 I/O operation
Explanation of inspection items
File creation: on multi-user systems, create files with appropriate access permissions to prevent unauthorized access. Apply a whitelist to the read/write/execute permissions of files in shared directories, granting the minimum necessary.
Data access: prevent unauthorized use of encapsulated data objects, and set reasonable data cache sizes to avoid exhausting system resources.
Application files: set explicit permissions (read, write, execute) on files created while the application runs, and delete temporary files promptly.
5.2 Operating Environment
Explanation of inspection items
Minimize open ports: close unnecessary operating system ports and services.
Backend services: restrict backend service management (such as data caching and storage, monitoring, and business administration) to the internal network; anything exposed to the public network must have authentication and access control.
Environment configuration: use secure, stable versions of the operating system, web server software, application frameworks, database components, and so on.
Sensitive code: place sensitive client-side code (such as package signing and username/password verification) in native libraries (for example .so files) to prevent tampering.
Debug channels: production code must not contain any debugging code or debugging interfaces.
Communication security: configure an HTTPS certificate, or other encrypted transport, for the website.

Swoole4 brings a powerful CSP concurrency model to the PHP language. The runtime provides three keywords with which all kinds of functionality can be implemented easily.
The PHP coroutine syntax in Swoole4 is borrowed from Golang; our respects to the Go development team.
PHP + Swoole complements Golang well. Golang: a static language, rigorous and powerful, with good performance. PHP + Swoole: a dynamic language, flexible, simple, and easy to use.
This article is based on Swoole 4.2.9 and PHP 7.2.9.
go: create a coroutine
chan: create a channel
defer: defer a task until the coroutine exits, executed in last-in-first-out order
All three are implemented purely as memory operations, with no I/O resource consumption. Like PHP's arrays, they are very cheap and can be used whenever needed. This differs from socket and file operations, which must request ports and file descriptors from the operating system, and whose reads and writes may block waiting on I/O.
Coroutine concurrency
The go function lets a function execute concurrently. During development, whenever a piece of logic can run concurrently, it can be placed in a go coroutine.
Sequential execution
function test1()
{
    sleep(1); // simulate 1s of blocking work (durations inferred from the 3s run time below)
    echo "b";
}

function test2()
{
    sleep(2); // simulate 2s of blocking work
    echo "c";
}

test1();
test2();

Execution results:
htf@LAPTOP-0K15EFQI:~$ time php b1.php
real 0m3.080s
user 0m0.016s
sys 0m0.063s
In the code above, test1 and test2 execute sequentially, taking 3 seconds in total.
Concurrent execution
By creating coroutines with go, the two functions test1 and test2 execute concurrently.


Swoole\Runtime::enableCoroutine();

go(function () {
    sleep(1); // now a coroutine-friendly sleep, thanks to enableCoroutine()
    echo "b";
});

go(function () {
    sleep(2);
    echo "c";
});
The function Swoole\Runtime::enableCoroutine() switches PHP's built-in stream, sleep, PDO, mysqli, Redis, and similar operations from synchronous blocking to coroutine-based asynchronous I/O.
Execution results:
bchtf@LAPTOP-0K15EFQI:~$ time php co.php
real 0m2.076s
user 0m0.000s
sys 0m0.078s
htf@LAPTOP-0K15EFQI:~$
You can see that the whole run completed in only 2 seconds.
Sequential execution time equals the sum of all task times: t1 + t2 + t3 + ...
Concurrent execution time equals the maximum of all task times: max(t1, t2, t3, ...)
Coroutine communication
With the go keyword, concurrent programming becomes much simpler. At the same time, it brings a new problem: if two coroutines are executing concurrently, and another coroutine needs the results of those two, how is that solved?
The answer is channels, created with new Chan in Swoole4. A channel can be understood as a queue with built-in coroutine scheduling. It has two operations, push and pop:
push: write to the channel; if it is full, the coroutine waits and resumes automatically when space frees up
pop: read from the channel; if it is empty, the coroutine waits and resumes automatically when data arrives
Using channels can facilitate concurrency management.
$chan = new Chan(2);

// Coroutine 1: collects the results of the two HTTP requests
go(function () use ($chan) {
    $result = [];
    for ($i = 0; $i < 2; $i++) {
        $result += $chan->pop();
    }
    var_dump($result);
});

// Coroutine 2: requests www.qq.com
go(function () use ($chan) {
    $cli = new Swoole\Coroutine\Http\Client('www.qq.com', 80);
    $cli->set(['timeout' => 10]);
    $cli->setHeaders([
        'Host' => 'www.qq.com',
        'User-Agent' => 'Chrome/49.0.2587.3',
        'Accept' => 'text/html,application/xhtml+xml,application/xml',
        'Accept-Encoding' => 'gzip',
    ]);
    $ret = $cli->get('/');
    // $cli->body is too large; use the HTTP status code for this test
    $chan->push(['www.qq.com' => $cli->statusCode]);
});

// Coroutine 3: requests www.163.com
go(function () use ($chan) {
    $cli = new Swoole\Coroutine\Http\Client('www.163.com', 80);
    $cli->set(['timeout' => 10]);
    $cli->setHeaders([
        'Host' => 'www.163.com',
        'User-Agent' => 'Chrome/49.0.2587.3',
        'Accept' => 'text/html,application/xhtml+xml,application/xml',
        'Accept-Encoding' => 'gzip',
    ]);
    $ret = $cli->get('/');
    // $cli->body is too large; use the HTTP status code for this test
    $chan->push(['www.163.com' => $cli->statusCode]);
});
Execution results:
htf@LAPTOP-0K15EFQI:~/swoole-src/examples/5.0$ time php co2.php
array(2) {

real 0m0.268s
user 0m0.016s
sys 0m0.109s
htf@LAPTOP-0K15EFQI:~/swoole-src/examples/5.0$
Here three coroutines are created with go. Coroutines 2 and 3 request the home pages of qq.com and 163.com respectively, and coroutine 1 needs the results of the HTTP requests; a Chan is used to coordinate the concurrency.
Coroutine 1 pops the channel twice in a loop; since the queue is empty, it enters the waiting state
When coroutines 2 and 3 finish, they push their data; coroutine 1 receives the results and continues executing
Delayed Tasks
In coroutine programming, you may need to run some cleanup tasks automatically when a coroutine exits, similar to PHP's register_shutdown_function. In Swoole4 this can be implemented with defer.


go(function () {
    echo "a";
    defer(function () {
        echo "~a";
    });
    echo "b";
    defer(function () {
        echo "~b";
    });
    echo "c";
    sleep(1); // simulate some work (duration inferred from the ~1s run time below)
});

Execution results:
htf@LAPTOP-0K15EFQI:~/swoole-src/examples/5.0$ time php defer.php
real 0m1.068s
user 0m0.016s
sys 0m0.047s
The go + Chan + defer provided by Swoole4 brings a new CSP concurrent programming model to PHP. Flexible use of the features Swoole4 provides can solve the design and development of all kinds of complex functionality in day-to-day work.

Branch operation
git branch <name>  create a branch
git checkout -b <name>  create and switch to the new branch
git checkout <name>  switch branches
git branch  list branches
git branch -v  show the last commit on each branch
git branch -vv  show the current branch and its upstream
git checkout -b <name> origin/<name>  check out a remote branch locally
git branch --merged  list branches already merged into the current branch
git branch --no-merged  list branches not yet merged into the current branch
git branch -d <name>  delete a local branch
git branch -D <name>  force-delete a branch
git push origin :<name>  delete a branch in the remote repository
git merge <name>  merge the branch into the current branch
Stash operations
git stash  stash current modifications
git stash apply  restore the most recent stash
git stash pop  restore the most recent stash and delete its record
git stash list  view the stash list
git stash drop <name>  (example: stash@{0}) remove a specific stash
git stash clear  clear all stashes
Rollback operations
git reset --hard HEAD^  roll back to the previous version
git reset --hard <commit_id>  roll back to a specific commit
git checkout -- <file>  discard changes to a file (if the file was staged, it reverts to the staged state; if it was committed, it reverts to the last committed state)
git reset HEAD <file>  unstage a file, moving the changes back to the working area
Tag operations
git tag <name>  add a tag (to the current version by default)
git tag <name> <commit_id>  tag a specific commit
git tag -a <name> -m 'description'  create a tag with an annotation
git tag  list all tags
git show <name>  view tag information
git tag -d <name>  delete a local tag
git push origin <tagname>  push a tag to the remote repository
git push origin --tags  push all tags to the remote repository
git push origin :refs/tags/<tagname>  delete a tag from the remote repository
Other operations
General operations
git push origin test  push a local branch to the remote repository
git rm -r --cached <file/folder>  remove a file or folder from version control
git reflog  list the commands that have been executed
git log --graph  view the branch merge graph
git merge --no-ff -m 'merge description' <branch>  merge without fast-forward, so the merge is recorded in history
git check-ignore -v <file>  show which ignore rule matches a file
git add -f <file>  force-add an ignored file
Creating a project repository

  1. git init  initialize the repository
  2. git remote add origin <url>  associate the remote repository
  3. git pull
  4. git fetch  fetch all branches from the remote repository
    Ignoring files already in the version library
  5. git update-index --assume-unchanged <file>  ignore an individual file
  6. git rm -r --cached <file/folder>  (use . to ignore all files)
    Un-ignoring files
    git update-index --no-assume-unchanged <file>
    Password-free pull and push
    git config --global credential.helper store

The Swoole open source project has a history of nearly 7 years since its first release in 2012. Over these seven years it has:
accepted 8,821 commits
released 287 versions
received and resolved 1,161 issues
merged 603 pull requests
had 100 developers contribute code
earned 11,940 stars on GitHub
Coroutines
In 2018 we launched the new Swoole4. Before that, Swoole was programmed mainly in synchronous blocking or asynchronous callback style. The new CSP programming model, implemented on coroutines, has gradually become the only programming mode we recommend. Coroutine programming greatly reduces the complexity of asynchronous programming; using Swoole4 coroutines is both simple and powerful. In the future Swoole5, we plan to remove non-coroutine features and code to shed historical baggage, improve stability, reduce complexity, remove unnecessary options, and become purely coroutine-based.
For the past six years our team worked mainly part-time, with most members coming from top domestic internet companies such as Tencent, Alibaba, Didi, Baidu, 360, and Xiaomi. Some contributors are PHP developers from abroad; even Dmitry Stogov, author of the PHP language's ZendVM kernel, has contributed code to Swoole. We have also recruited college students to write code for Swoole, gradually cultivating the next generation of developers.
In July 2018 we established a full-time team focused on developing the Swoole kernel and on the native components and ecosystem of Swoole Cloud, saying goodbye to our improvised past and becoming a professional open-source R&D team.
Our goal is to make Swoole an industrial-grade technology like Node.js and Go, and the cornerstone of the PHP language for asynchronous I/O and network communication.
R&D management
After establishing the full-time R&D team, we gradually built a comprehensive R&D management system to improve Swoole's software quality, covering the following aspects:
Test-driven development (TDD)
We are investing heavily in unit test scripts, stress test scripts, and automated testing to improve unit test coverage. There are currently 680 test cases and 17 stress test projects, and the build and test results of every commit and pull request can be seen on the Travis CI platform.
Development itself is also TDD-based: when implementing new features, refactoring, or fixing bugs, we first write the corresponding unit test scripts, covering every scenario the code change touches.
unit testing
Code Review
Team members conduct cross code reviews, fully evaluating and discussing the details of every code change.
For major changes, the whole team reviews together, spending hours or even days discussing the details of every changed line.
RFC mechanism
For new features or changes that may alter underlying behavior (as opposed to bug fixes, performance improvements, and refactoring), we proceed in four steps:
1. Initiate an RFC proposal at https://github.com/swoole/rfc... The proposal explains in detail the motivation and consequences of the change, related configuration items, scope of impact, usage, and examples.
2. Discuss the proposal thoroughly: dig into the details, weigh the pros and cons, and refine the design. Once every issue has been discussed clearly, the proposal is approved and implementation begins.
3. The development owner creates a Git branch, writes the unit test scripts, writes the code to implement everything in the proposal, and finally opens a Pull Request.
4. Cross-review: check the code, propose improvements, give feedback to the development owner, and keep polishing the details. Finally, the branch is merged into the mainline.
The entire process is conducted publicly on GitHub, and anyone interested in the Swoole project can participate.
Swoole RFC
Grayscale test
To ensure the stability of official releases, we run grayscale (canary) tests on internal projects before each release to verify the new version's stability.
We have also established contact with the authors of most Swoole-based frameworks; new versions are sent to the authors of major frameworks for early trials, and significant underlying changes or incompatibilities are communicated in advance to the authors of open source projects built on Swoole.
In its early years, the Swoole project was not very professional: there were many bugs and rough edges, and many users fell into pitfalls because of them. Since establishing a full-time R&D team six months ago, we have made rapid progress in R&D management, and Swoole's stability and maturity are no longer what they used to be. Stability always comes first, and we will be even more cautious and rigorous in the future to guarantee quality.
In the second half of 2018, we refactored the underlying code several times and made many improvements in structure, readability, reusability, and encapsulation, making the Swoole codebase more concise and elegant.
In terms of programming language, we are gradually replacing C with C++. The object orientation, smart pointers, containers, templates, and other features that C++ provides help us further improve the team's development efficiency.
We also welcome everyone in the PHP community to participate in the Swoole project and contribute code.
Swoole's documentation has long been criticized by developers. In 2018, our team steadily increased its investment in documentation: rewriting and reorganizing the docs, adding rich example programs and more detailed illustrations, fixing details, and removing emotional statements to be more objective, neutral, and rigorous.
Plans for 2019
In the new year, we will focus on three directions.
Remove non-coroutine features and unnecessary modules, shed historical baggage, improve stability, reduce complexity, and cut unnecessary options: make things simpler.
The Swoole kernel will be continuously refactored and streamlined: fewer lines of code, less redundancy, and as much code reuse as possible.
Going deep into projects
By the end of 2018, we had gradually established connections with companies that use Swoole heavily in production, including Tencent Cloud, Yuewen, TAL, Momo, Youxin, and others. We want to understand real application scenarios and business models, communicate and cooperate in depth, offer suggestions, help enterprise technical teams solve business problems better, and take their feedback back to improve the underlying code.
Ecosystem
In 2019, we will build supporting tools and components on top of Swoole 4 coroutines to make up for the gaps in the PHP ecosystem in the Cloud Native era.

What is high concurrency?
High concurrency is one of the performance indicators of a distributed internet system architecture. It usually refers to the number of requests a system can handle per unit of time;
simply put, QPS (Queries Per Second).
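To make the unit concrete (the numbers below are illustrative, not from any benchmark in this article), QPS is simply completed requests divided by elapsed wall-clock seconds:

```java
// QPS = completed requests / elapsed seconds (illustrative numbers only).
public class Qps {
    static double qps(long requests, double seconds) {
        return requests / seconds;
    }

    public static void main(String[] args) {
        // e.g. 1,000,000 requests served in 60 seconds:
        System.out.printf("%.2f requests/s%n", qps(1_000_000, 60.0)); // ≈ 16666.67
    }
}
```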
So what are we talking about when we're talking about high concurrency?
Here is the conclusion:
The basic manifestation of high concurrency is the number of requests a system can handle per unit of time;
the core of high concurrency is squeezing effective work out of CPU resources.
For example, suppose we build an "MD5 exhaustive search" application: each request carries an MD5 digest, and the system enumerates candidates until it finds and returns the original string. Here the scenario is CPU-intensive rather than I/O-intensive: the CPU is doing useful computation the whole time and can even be fully utilized, so discussing high concurrency is beside the point. Of course, we can always raise throughput by adding machines, which is to say adding CPUs, but that is the kind of obvious non-answer everyone already knows: "there is no high-concurrency problem that adding machines cannot solve; if there is, you haven't added enough machines!" 🐶
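To make that CPU-bound scenario concrete, here is a minimal sketch (class and method names are my own, not from the article): recover a short lowercase string from its MD5 digest by exhaustive search. The CPU does useful work the entire time, which is exactly why extra concurrency beyond the core count buys nothing here.

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// CPU-intensive workload: exhaustively search lowercase strings until one
// hashes to the target MD5 digest. No I/O, no waiting - pure computation.
public class Md5Crack {
    static String md5(String s) throws NoSuchAlgorithmException {
        byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes());
        return String.format("%032x", new BigInteger(1, d)); // 32-char hex
    }

    // Try every string of length `len` over 'a'..'z'.
    static String crack(String targetHex, int len) throws NoSuchAlgorithmException {
        return search(new char[len], 0, targetHex);
    }

    static String search(char[] buf, int pos, String target) throws NoSuchAlgorithmException {
        if (pos == buf.length) {
            String candidate = new String(buf);
            return md5(candidate).equals(target) ? candidate : null;
        }
        for (char c = 'a'; c <= 'z'; c++) {
            buf[pos] = c;
            String hit = search(buf, pos + 1, target);
            if (hit != null) return hit;
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        String target = md5("php");           // pretend only the digest is known
        System.out.println(crack(target, 3)); // prints "php"
    }
}
```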
For most internet applications, the CPU is not, and should not be, the system's bottleneck; the system spends most of its time waiting for I/O (disk/memory/network) reads and writes to complete.
At this point someone may say: "When I look at our system monitoring, memory and network are fine, but CPU utilization is maxed out. Why is that?"
That is a good question, and I will give practical examples below. Note again the phrase "effective squeezing" mentioned above; the whole article revolves around it!
Control variable method
Everything is connected: when we talk about high concurrency, every layer of the system has to keep up. Let's first review a classic client/server HTTP request flow.
As shown by the serial number in the figure:
1. The request is resolved by the DNS server and reaches the load balancing cluster.
2. The load balancer distributes the request to the service layer according to its configured rules. The service layer is our core business layer, and it may in turn call RPC services, MQ, and so on.
3. The request then passes through the cache layer.
4. Finally, data is persisted.
5. Data is returned to the client.
To achieve high concurrency, the load balancing, service, cache, and persistence layers all need to be highly available and high-performance. Even step 5 can be optimized by compressing static files, pushing them with HTTP/2, or serving them from a CDN; several books could be written about optimizing each of these layers.
This article mainly discusses the service layer, the part circled in red in the figure, and deliberately sets aside the impact of databases and caching.
High-school science class calls this the control variable method.
    Further Discussion on Concurrency
The Evolution of Network Programming Models
Concurrency has always been a key and difficult topic in server-side programming. To improve concurrency, server models evolved from the initial fork-per-request process, to process/thread pools, to epoll event-driven I/O (Nginx, and Node.js with its infamous callbacks), and finally to coroutines.
This evolution is, quite visibly, the history of squeezing ever more effective work out of the CPU.
What? Not obvious?
Then let's talk about context switching.
Before discussing context switching, let's clarify two terms.
Parallelism: two events happen at the same instant.
Concurrency: two events alternate within the same period of time; viewed macroscopically, both events occur.
A thread is the smallest unit of operating system scheduling, while a process is the smallest unit of resource allocation. Because a CPU core executes serially, on a single-core CPU only one thread can occupy the CPU at any moment; therefore Linux, as a multitasking system, switches between processes/threads frequently.
Before each task runs, the CPU must know where to load it and where to start; this information is kept in the CPU registers and the program counter, and is called the CPU context.
Processes are managed and scheduled by the kernel, and process switches can only happen in kernel mode. Therefore a process's user-space resources such as virtual memory, stack, and global variables, together with kernel-space state such as the kernel stack and registers, are called the process context.
As mentioned above, threads are the scheduling unit, and threads of the same process share resources such as the parent process's virtual memory and global variables. These shared resources, plus the thread's own private data, are called the thread context.
For thread context switches, switching between threads of the same process consumes fewer resources than switching between processes, because the shared resources need not be switched.
It is now easy to state: switching between processes and threads causes CPU context switches and process/thread context switches, and these switches consume extra CPU resources.
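As a rough, machine-dependent sketch of that cost (entirely my own illustration): force two threads to hand control back and forth through a shared lock and time the handoffs. Each handoff goes through the kernel's scheduler, which is why it typically costs on the order of microseconds rather than the nanoseconds of a plain method call.

```java
// Two threads alternate strictly via wait/notify; every handoff forces a
// thread context switch. Reported numbers vary widely across machines.
public class PingPong {
    static final Object lock = new Object();
    static boolean ping = true;

    // Returns the approximate cost in nanoseconds of one thread handoff.
    static long measure(int rounds) throws InterruptedException {
        ping = true;
        Thread pong = new Thread(() -> {
            synchronized (lock) {
                for (int i = 0; i < rounds; i++) {
                    while (ping) {
                        try { lock.wait(); } catch (InterruptedException e) { return; }
                    }
                    ping = true;
                    lock.notifyAll();
                }
            }
        });
        pong.start();
        long start = System.nanoTime();
        synchronized (lock) {
            for (int i = 0; i < rounds; i++) {
                while (!ping) lock.wait();
                ping = false;
                lock.notifyAll();
            }
        }
        pong.join();
        return (System.nanoTime() - start) / (2L * rounds); // 2 handoffs per round
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(measure(100_000) + " ns per handoff (approx.)");
    }
}
```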
Do Coroutines Still Need Context Switching?
So do coroutines get rid of context switching? They still switch, but there are no CPU context switches or process/thread context switches, because all the switching happens inside one thread, in user mode. You can even loosely think of a coroutine context switch as just moving a pointer in your program; the CPU still belongs to the current thread.
For a deeper understanding, look into Go's GMP model.
The net effect is that coroutines squeeze the effective utilization of the CPU even further.
Back to the question at the beginning
Someone may say: "When I look at our system monitoring, memory and network are fine, but CPU utilization is maxed out. Why is that?"
Note that whenever this article discusses CPU utilization, it adds the qualifier "effective". A maxed-out CPU often means a great deal of ineffective computation.
Taking "the best language in the world" as an example, in the typical CGI mode of PHP-FPM, every HTTP request:
reads hundreds of PHP framework files from disk,
re-establishes and then releases MySQL/Redis/MQ connections,
re-interprets, recompiles, and executes the PHP files again, and
constantly switches among the many php-fpm processes.

The CGI running mode of PHP fundamentally determines its catastrophic performance under high concurrency.
Finding the problem is often harder than solving it. Once we understand what we are talking about when we talk about high concurrency, we find that high concurrency and high performance are not limited by the programming language, only by your thinking.
Find the problem, solve the problem! What results can we achieve once we squeeze CPU performance effectively?
Let's compare the performance of an HTTP service built with PHP + Swoole against one built with Java's high-performance asynchronous framework, Netty.
Preparation before performance comparison
What is Swoole
Swoole is an event-based, high-performance, asynchronous, parallel network communication engine for PHP, written in C and C++.
What is Netty
Netty is an open source Java framework provided by JBoss. It is an asynchronous, event-driven network application framework with tools for rapidly developing high-performance, highly reliable network servers and clients.
What is the maximum number of HTTP connections that a single machine can reach?
Recalling computer networking basics: HTTP is an application layer protocol, and at the transport layer each TCP connection is established with a three-way handshake.
Each TCP connection is identified by four attributes: local IP, local port, remote IP, and remote port.
The TCP protocol header is as follows (image from Wikipedia):
The local port field is 16 bits, so there are 2^16 = 65536 possible values, i.e. ports 0-65535.
The remote port field is likewise 16 bits, giving the same range.
Also, in the underlying Linux network programming model, the operating system maintains a file descriptor (fd) for each TCP connection; the limit on the number of fds can be viewed and modified with the ulimit -n command. Before testing, we can run ulimit -n 65536 to raise this limit.
Therefore, ignoring hardware resource limits:
the maximum number of local (client-side) HTTP connections is 65535 local ports × 1 local IP = 65535;
the maximum number of remote connections is 65535 × the number of remote (client) IPs, i.e. effectively unlimited.
PS: in practice the operating system reserves some ports, so the local connection count never quite reaches the theoretical value.
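A quick sanity check of the arithmetic above (a throwaway sketch of my own):

```java
// Sanity check of the port arithmetic: a 16-bit port field has 65536
// possible values (0..65535), so 65535 is the highest usable port number.
public class PortMath {
    public static void main(String[] args) {
        int portSpace = 1 << 16;         // 65536 values
        int maxPort = portSpace - 1;     // 65535
        System.out.println(portSpace + " values, highest port " + maxPort);
        // The server side is bounded by distinct remote (IP, port) pairs,
        // not by its own local ports, hence "effectively unlimited".
    }
}
```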
Performance Comparison
Testing Resources
Each Docker container is given 1 GB of memory and 2 CPU cores, as shown in the figure:
The Docker Compose configuration is as follows:


version: "2.2"
services:
  java8:
    container_name: "java8"
    hostname: "java8"
    image: "java:8"
    volumes:
      - /home/cg/MyApp:/MyApp
    ports:
      - "5555:8080"
    environment:
      - TZ=Asia/Shanghai
    working_dir: /MyApp
    cpus: 2
    cpuset: 0,1
    mem_limit: 1024m
    memswap_limit: 1024m
    mem_reservation: 1024m
    tty: true


version: "2.2"
services:
  php7-sw:
    container_name: "php7-sw"
    hostname: "php7-sw"
    image: "mileschou/swoole:7.1"
    volumes:
      - /home/cg/MyApp:/MyApp
    ports:
      - "5551:8080"
    environment:
      - TZ=Asia/Shanghai
    working_dir: /MyApp
    cpus: 2
    cpuset: 0,1
    mem_limit: 1024m
    memswap_limit: 1024m
    mem_reservation: 1024m
    tty: true


use Swoole\Server;
use Swoole\Http\Response;

// the listen address was elided in the original listing; "0.0.0.0" is assumed
$http = new swoole_http_server("0.0.0.0", 8080);

$http->set([
    'worker_num' => 2,
]);

$http->on("request", function ($request, Response $response) {
    //go(function () use ($response) {
    //    Swoole\Coroutine::sleep(0.01);
        $response->end('Hello World');
    //});
});

$http->on("start", function (Server $server) {
    go(function () use ($server) {
        echo "server listen on \n";
    });
});

$http->start();

Java Key Code
The source code comes from https://github.com/netty/netty.

public static void main(String[] args) throws Exception {
    // Configure SSL.
    final SslContext sslCtx;
    if (SSL) {
        SelfSignedCertificate ssc = new SelfSignedCertificate();
        sslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()).build();
    } else {
        sslCtx = null;
    }

    // Configure the server.
    EventLoopGroup bossGroup = new NioEventLoopGroup(2);
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    try {
        ServerBootstrap b = new ServerBootstrap();
        b.option(ChannelOption.SO_BACKLOG, 1024);
        b.group(bossGroup, workerGroup)
         .handler(new LoggingHandler(LogLevel.INFO))
         .childHandler(new HttpHelloWorldServerInitializer(sslCtx));

        Channel ch = b.bind(PORT).sync().channel();

        System.err.println("Open your web browser and navigate to " +
                (SSL? "https" : "http") + "://" + PORT + '/');

        ch.closeFuture().sync();
    } finally {
        bossGroup.shutdownGracefully();
        workerGroup.shutdownGracefully();
    }
}
Because I provided only two CPU cores of resources, both services run with only a single worker process.
Port 5551 represents a PHP service.
Port 5555 represents a Java service.
Stress testing tool for the comparison: Apache Bench (ab).
ab command: docker run --rm jordi/ab -k -c 1000 -n 1000000
The benchmark issues 1,000,000 HTTP requests at a concurrency of 1,000.
Java+netty pressure test results:
PHP+SWOOLE pressure test results:
Service        QPS        Response time ms (min, max)    Memory (MB)
Java + Netty   84042.11   (11, 25)                       600+
PHP + Swoole   87222.98   (9, 25)                        30+
PS: the figures above show the best result of three stress test runs.
Overall, the performance difference is not large, and the PHP + Swoole service even slightly outperforms the Java + Netty service, especially in memory usage: Java uses 600 MB while PHP uses only 30 MB.
What does this mean?
With no blocking I/O operations, no coroutine switches occur.
This only shows that in multithreaded + epoll mode, CPU performance is squeezed effectively; you can even write high-concurrency, high-performance services in PHP.
Performance Comparison: the Moment to Witness the Miracle
The code above does not really demonstrate the strength of coroutines, because the request contains no blocking operations; real applications, however, are full of blocking operations such as file reads and DB connections/queries. Let's look at the stress test results after adding a blocking operation.
In both the Java and the PHP code, I added a sleep(0.01) /* seconds */ call to simulate a 0.01-second blocking system call.
The code will not be pasted again.
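Before looking at the results, a rough capacity model (my own illustration, not part of the benchmark) sets expectations: when every request blocks a worker for 0.01 s, a design that parks the whole thread cannot exceed workers / blockSeconds requests per second, while a coroutine runtime parks only the waiting request and keeps the thread busy.

```java
// Illustrative throughput ceiling, not a benchmark: a worker thread that
// blocks for blockSeconds on every request completes at most
// 1 / blockSeconds requests per second.
public class BlockingCeiling {
    static double maxQps(int workers, double blockSeconds) {
        return workers / blockSeconds;
    }

    public static void main(String[] args) {
        // e.g. 4 event-loop threads, each request sleeping 0.01 s on the loop:
        System.out.println(maxQps(4, 0.01) + " requests/s at best"); // 400.0 ...
    }
}
```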
Java+netty pressure test results with IO blocking operation:
Completing the whole stress test took about 10 minutes...
PHP+swoole pressure test results with IO blocking operation:
Service        QPS       Response time ms (min, max)    Memory (MB)
Java + Netty   1562.69   (52160)                        100+
PHP + Swoole   9745.20   (9, 25)                        30+
The results show that the QPS of the coroutine-based PHP + Swoole service is more than six times that of the Java + Netty service.
Of course, both test programs are official demo code, and there are certainly many configurations that could be tuned; the results would be much better after optimization.
Can you think about why the official default number of threads/processes is not set higher?
More processes/threads are not always better. As we discussed earlier, switching between processes/threads incurs extra CPU cost, especially switches between user mode and kernel mode!
I am not picking on Java with these results. My point is: once you understand the core of high concurrency and aim at that goal, then whatever programming language you use, as long as you optimize effective CPU utilization (connection pools, daemon processes, multithreading, coroutines, select polling, epoll event-driven I/O), you too can build a high-concurrency, high-performance system.
So, do you now understand what we're talking about when we're talking about high-performance?
Ideas are always more important than results!
Welcome to reprint this article. Please indicate the author and source when reprinting. Thank you!