Is Go the peak choice for developing backend systems?
In this blog we'll look at what makes Go a killer choice for building performant, scalable backend systems.
Deepanshu
@dipxsy
Building Performant Backends: Why Go (Golang) Outshines Node.js
In today's world of web development, where speed and optimized performance are the bare minimum for any infrastructure, a backend that stays performant and scales as your application grows and user demand increases is essential.
This is where Go (or Golang) steps in. Go is not just another programming language in a crowded ecosystem; it has carved out a unique niche in the web development world.
"Why Consider Go When My Node.js App Works Fine?"
The answer lies in Go's concurrency model, built around goroutines and backed by an efficient scheduler and runtime resource management. I used to be a Node.js enthusiast, happily writing APIs with a touch of Hono. However, when I ran performance tests to see how Node APIs handle concurrent requests, the results were predictable: not great.
The failure rate for Node.js APIs under load was significant, and without robust tools like Kubernetes and advanced DevOps strategies, maintaining good performance was challenging.
Then I discovered Go. Running the same tests, I was genuinely impressed: Go handled thousands of concurrent requests with almost no failures. The comparison was fair, using the same metrics and conditions for both Node.js and Go.
A Simple Go Server Example
Here’s an example of a basic Go server with goroutines, showcasing its ability to handle concurrent requests seamlessly:
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello from %s!\n", r.URL.Path)
}

func main() {
	http.HandleFunc("/", handler)
	fmt.Println("Server running on port 8080...")
	// ListenAndServe blocks; log.Fatal surfaces startup errors
	// such as the port already being in use.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
Each incoming request is handled by a goroutine, enabling seamless scaling with minimal memory overhead.
Why Go Outperforms Node.js for APIs
1. Efficient HTTP Server
Go’s built-in net/http package eliminates the need for external dependencies, providing a simple yet powerful server that handles concurrent connections out of the box.
2. Optimized Resource Management
Garbage Collector: Go’s concurrent, low-pause garbage collector keeps latencies consistent even under load.
Non-blocking I/O: the runtime multiplexes goroutines onto OS threads, so a blocking call in one goroutine doesn’t stall the others.
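A quick sketch of what that non-blocking behavior buys you in practice (the fetch helper, the "db"/"cache"/"api" names, and the delays are all invented for illustration): three simulated I/O calls run concurrently and their results arrive in completion order, not launch order.

```go
package main

import (
	"fmt"
	"time"
)

// fetch simulates an I/O-bound call (database, cache, remote API)
// by sleeping; the names and delays are made up for this sketch.
func fetch(name string, delay time.Duration, out chan<- string) {
	time.Sleep(delay)
	out <- name + " done"
}

// gatherAll runs three simulated I/O calls concurrently and collects
// the results in the order they complete.
func gatherAll() []string {
	out := make(chan string, 3)
	go fetch("db", 30*time.Millisecond, out)
	go fetch("cache", 10*time.Millisecond, out)
	go fetch("api", 20*time.Millisecond, out)

	results := make([]string, 0, 3)
	for i := 0; i < 3; i++ {
		results = append(results, <-out) // arrives as each call finishes
	}
	return results
}

func main() {
	for _, r := range gatherAll() {
		fmt.Println(r)
	}
}
```

The total wall time is roughly the slowest call (30ms), not the sum of all three, because none of the goroutines blocks the others.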
3. Superior Networking Stack
Go simplifies connection pooling, keep-alive handling, and HTTP/2 support, making it ideal for high-traffic applications.
Performance Test: Go vs. Node.js
Here’s the Go server I used for the performance test:
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
	"time"

	"github.com/go-chi/chi/v5"
	"github.com/go-chi/chi/v5/middleware"
	"github.com/joho/godotenv"
)

func main() {
	// Load environment variables
	if err := godotenv.Load(); err != nil {
		fmt.Println("Error loading .env file")
	}
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080" // sensible default when PORT is unset
	}

	router := chi.NewRouter()

	// Middleware
	router.Use(middleware.Logger)
	router.Use(middleware.Recoverer)
	router.Use(middleware.Timeout(30 * time.Second))

	// Routes
	router.Get("/", func(w http.ResponseWriter, r *http.Request) {
		log.Println("Handling request for /")
		w.Write([]byte("Welcome to the multi-threaded web server"))
	})

	router.Get("/hello/{name}", func(w http.ResponseWriter, r *http.Request) {
		name := chi.URLParam(r, "name")
		log.Printf("Handling request for /hello/%s\n", name)
		fmt.Fprintf(w, "Hello, %s!", name)
	})

	router.Get("/slow", func(w http.ResponseWriter, r *http.Request) {
		log.Println("Simulating a slow request...")
		time.Sleep(5 * time.Second)
		w.Write([]byte("This was a slow request, but handled concurrently!"))
	})

	router.Get("/fast", func(w http.ResponseWriter, r *http.Request) {
		log.Println("Handling request for /fast")
		w.Write([]byte("This is the fast endpoint!"))
	})

	// Start server
	fmt.Println("Server is running on port:", port)
	log.Fatal(http.ListenAndServe(":"+port, router))
}
Testing Concurrent Requests
To evaluate performance, I used a Bash script to send concurrent requests and record metrics.
Bash Script for Testing
#!/bin/bash
# Note: %3N for millisecond timestamps requires GNU date (Linux);
# on macOS, install coreutils and use gdate instead.
ENDPOINTS=("http://localhost:8080/slow" "http://localhost:8080/fast" "http://localhost:8080")
NUM_REQUESTS=10000

SUCCESS_FILE=$(mktemp)
FAILURE_FILE=$(mktemp)
TIME_FILE=$(mktemp)

cleanup() {
    rm -f "$SUCCESS_FILE" "$FAILURE_FILE" "$TIME_FILE"
}
trap cleanup EXIT

test_endpoint() {
    local endpoint=$1
    local id=$2
    local start_time end_time duration http_status
    start_time=$(date +%s%3N)
    http_status=$(curl -s -w "%{http_code}" -o /dev/null "$endpoint")
    end_time=$(date +%s%3N)
    duration=$((end_time - start_time))
    if [[ "$http_status" == "200" ]]; then
        echo "Request $id to $endpoint succeeded in ${duration}ms."
        echo "$duration" >> "$TIME_FILE"
        echo "$endpoint" >> "$SUCCESS_FILE"
    else
        echo "Request $id to $endpoint failed in ${duration}ms."
        echo "$endpoint" >> "$FAILURE_FILE"
    fi
}

echo "Starting tests..."
for endpoint in "${ENDPOINTS[@]}"; do
    for ((i=1; i<=NUM_REQUESTS; i++)); do
        test_endpoint "$endpoint" "$i" &
    done
done
wait

# Results summary (guard against division by zero when nothing succeeded)
total_requests=$((NUM_REQUESTS * ${#ENDPOINTS[@]}))
success_count=$(wc -l < "$SUCCESS_FILE")
failure_count=$(wc -l < "$FAILURE_FILE")
average_time=$(awk '{sum+=$1} END {if (NR > 0) print sum/NR; else print 0}' "$TIME_FILE")

echo "Total Requests: $total_requests"
echo "Successful Requests: $success_count"
echo "Failed Requests: $failure_count"
echo "Average Response Time: ${average_time}ms"
Results
The test sent 10,000 requests to each endpoint. The Go server handled the load impressively:
Minimal failures
Consistent response times
Outstanding scalability
Conclusion
If your application needs to handle heavy traffic or scale efficiently, Go is well worth exploring for your backend systems.
This was my first tech blog—thanks for reading! Goodbye!